Anushka SIP Report
ON
A Study of Artificial Intelligence: Comparison Between
AMD and NVIDIA
ACKNOWLEDGEMENT
I would like to gratefully acknowledge the contribution of all the people who took an
active part in and provided valuable support to me during the course of this project. To
begin with, I would like to offer my sincere thanks to MS. AASTHA BEHL, my internal
mentor. Without her guidance, support and valuable suggestions during the project, it
would not have been accomplished.
I also sincerely thank my friends and family, who provided valuable suggestions,
shared their rich experience and helped me script the exact requisites.
ANUSHKA MISHRA
B.Com (Hons)
01714188822
DECLARATION
I, ANUSHKA MISHRA, hereby declare that the minor project report on the topic
“Artificial Intelligence: Comparison of AMD and NVIDIA” is an original piece of research
work done by me. I have specified, by means of references, the sources from which the
information has been taken. To the best of my knowledge, my minor project is not the
same as any that may have already been submitted for a degree at any university or board.
ANUSHKA MISHRA
2022-2025
CONTENTS
S. No.   Particulars              Page No.
1.       Acknowledgement          2
2.       Declaration              3
4.       Executive Summary        6
7.       Review of Literature     18
8.       Company Profile          22
9.       Research Methodology     34
13.      Conclusion               67
14.      Bibliography             70
15.      Appendices               72
List of Tables and Figures
S. No. Figures Page No.
1. Figure 1 7
2. Figure 2 15
3. Figure 3 16
4. Figure 4 18
5. Figure 5 21
6. Figure 6 22
7. Figure 7 23
8. Figure 8 24
9. Figure 9 33
10. Figure 10 34
11. Figure 11 42
12. Figure 12 45
13. Table 1, Figure 13 46
14. Table 2, Figure 14 47
15. Table 3, Figure 15 48
16. Table 4, Figure 16 49
17. Table 5, Figure 17 50
18. Table 6, Figure 18 51
19. Table 7, Figure 19 52
20. Table 8, Figure 20 53
21. Table 9, Figure 21 54
22. Table 10, Figure 22 55
23. Figure 23 61
24. Table 11 66
25. Figure 24 67
26. Figure 25 70
EXECUTIVE SUMMARY
This project is based on an in-depth comparison between AMD and NVIDIA, two GPU
manufacturers making significant contributions to the field of artificial intelligence (AI).
The study also compares the financial positions of the two companies and concludes
which one performed better at the end of 2023 and the beginning of 2024.
The objective is to present an analysis of the impact of artificial intelligence (AI) in today's
world, focusing on its various applications across different sectors and the implications
of its widespread adoption.
The study also contains a brief SWOT analysis of both companies to identify future
opportunities and threats and plan accordingly.
CHAPTER – 1
INTRODUCTION
TO THE TOPIC
Fig. 1
WHAT IS ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) is a science and a set of computational technologies that are
inspired by—but typically operate quite differently from—the ways people use their
nervous systems and bodies to sense, learn, reason, and act. While the rate of progress
in AI has been patchy and unpredictable, there have been significant advances since the
field's inception sixty years ago. Once a mostly academic area of study, twenty-first
century AI enables a constellation of mainstream technologies that are having a
substantial impact on everyday lives. Computer vision and AI planning, for example,
drive the video games that are now a bigger entertainment industry than Hollywood.
Deep learning, a form of machine learning based on layered representations of variables
referred to as neural networks, has made speech-understanding practical on our phones
and in our kitchens, and its algorithms can be applied widely to an array of applications
that rely on pattern recognition. Natural Language Processing (NLP) and knowledge
representation and reasoning have enabled a machine to beat the Jeopardy champion
and are bringing new power to Web searches.
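The “layered representations” idea behind deep learning can be pictured as a stack of simple computations. The sketch below is purely illustrative: the weights are made up for the example, whereas a real network would learn them from data.

```python
# A minimal sketch of a layered neural network: each layer turns its inputs
# into new features via weighted sums passed through a non-linearity.
import math

def layer(inputs, weights, biases):
    # One output per neuron: sigmoid(weighted sum of inputs + bias).
    return [
        1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.2]                                        # input features
h = layer(x, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.2])   # hidden layer (2 neurons)
y = layer(h, [[1.5, -1.1]], [0.05])                    # output layer (1 neuron)
print(round(y[0], 3))                                  # a score between 0 and 1
```

Stacking more such layers is what lets deep networks build up the pattern-recognition ability described above.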
personalized learning experiences, and advancing research through data analysis and
predictive modelling. Overall, the significance of AI lies in its ability to drive efficiency,
innovation, and transformation across various sectors, ultimately shaping the future of
society.
3. Early AI Programs (1950s-1960s): In the late 1950s and early 1960s, researchers
developed early AI programs, including the Logic Theorist by Allen Newell and
Herbert A. Simon, and the General Problem Solver (GPS) by Newell and Simon.
These programs demonstrated the feasibility of using computers to solve problems
and reason symbolically.
Throughout its history, AI research has been characterized by periods of excitement,
followed by periods of scepticism, but overall, it has continued to advance steadily, with
increasingly sophisticated algorithms and applications.
TYPES OF AI
From chatbots to super-robots, here are the types of AI to know and where the tech is
headed next.
I. Capability-Based types:
1) Narrow AI - Narrow AI, also known as Artificial Narrow Intelligence (ANI) or weak
AI, describes AI tools designed to carry out very specific actions or commands.
ANI technologies are built to serve and excel at one cognitive capability and
cannot independently learn skills beyond their design. They often utilize machine
learning and neural network algorithms to complete these specified tasks. For
instance, natural language processing is a type of narrow AI because it can
recognize and respond to voice commands but cannot perform other tasks
beyond that. Some examples of narrow AI include image recognition software,
self-driving cars and AI virtual assistants.
3) Artificial Superintelligence - Artificial superintelligence (ASI), or super AI, is the
stuff of science fiction. It’s theorized that once AI has reached the general
intelligence level, it will soon learn at such a fast rate that its knowledge and
capabilities will become stronger even than those of humankind. ASI would act as
the backbone technology of completely self-aware AI and other individualistic
robots. Its concept is also what fuels the popular media trope of “AI takeovers.”
But at this point, it’s all speculation.
2) Limited Memory AI - Limited memory AI can store past data and use that data to
make predictions. This means it actively builds its own limited, short-term
knowledge base and performs tasks based on that knowledge. The core of limited
memory AI is deep learning, which imitates the function of neurons in the human
brain. This allows a machine to absorb data from experiences and “learn” from
them, helping it improve the accuracy of its actions over time. Today, the limited
memory model represents the majority of AI applications. It can be applied in a
broad range of scenarios, from smaller scale applications, such as chatbots, to
self-driving cars and other advanced use cases.
3) Theory of Mind AI - Theory of mind refers to the concept of AI that can perceive
and pick up on the emotions of others. The term is borrowed from psychology,
describing humans’ ability to read the emotions of others and predict future
actions based on that information. Theory of mind hasn’t been fully realized yet
and stands as the next substantial milestone in AI’s development. Theory of mind
could bring plenty of positive changes to the tech world, but it also poses its own
risks. Since emotional cues are so nuanced, it would take a long time for AI
machines to perfect reading them and could potentially make big errors while in
the learning stage. Some people also fear that once technologies are able to
respond to emotional signals as well as situational ones, the result could mean
automation of some jobs.
OVERVIEW OF THE COMPANIES

BASIS          Advanced Micro Devices          NVIDIA

focus on both consumer and enterprise markets.

Future Plans
AMD: Continues to focus on innovation in CPU and GPU technologies, including
advancements in high-performance computing and AI. AMD's acquisition of Xilinx
signals its intent to expand into FPGA (Field-Programmable Gate Array) technology.
NVIDIA: Continues to invest heavily in AI research and development, with a focus on
advancing GPU technology for deep learning, autonomous vehicles, robotics, and other
AI applications.

Competition
AMD: Intel is the main competitor in the CPU market; NVIDIA is the main competitor in
the GPU market; other semiconductor companies such as Qualcomm, IBM, and ARM
also compete.
NVIDIA: AMD is the main competitor in the GPU market; Intel is an emerging competitor
in the discrete GPU market; other semiconductor companies focusing on AI, data
centers, and automotive technologies also compete.

Fig. 2
CHAPTER – 2
Fig. 3
OBJECTIVES
2) To have an overview of the two competing GPU manufacturers: AMD and NVIDIA
CHAPTER – 3
REVIEW OF LITERATURE
Fig. 4
I. Introduction
In the 21st century, artificial intelligence (AI) has become an important area of research
in virtually all fields: engineering, science, education, medicine, business, accounting,
finance, marketing, economics, the stock market and law, among others (Halal (2003),
Masnikosa (1998), Metaxiotis et al. (2003), Raynor (2000), Stefanuk and Zhozhikashvili
(2002), Tay and Ho (1992) and Wongpinunwatana et al. (2000)). The field of AI has grown
enormously, to the extent that tracking the proliferation of studies has become a difficult task
(Ambite and Knoblock (2001), Balazinski et al. (2002), Cristani (1999) and Goyache
(2003)). Apart from the application of AI to the fields mentioned above, studies have been
segregated into many areas with each of these springing up as individual fields of
knowledge (Eiter et al. (2003), Finkelstein et al. (2003), Grunwald and Halpern (2003),
Guestrin et al. (2003), Lin (2003), Stone et al. (2003) and Wilkins et al. (2003)).
in AI so they can continue in their efforts aimed at developing this area of concentration
through newly generated ideas. Consequently, they would be able to push forward the
frontier of knowledge in AI. In the section that follows, this paper presents a brief
explanation of some important areas in Artificial Intelligence, to introduce readers to
the wide-ranging topics that AI encompasses.
These descriptions only account for a selected number of areas:
3. Expert system – An expert system is computer software that can solve a narrowly
defined set of problems using information and reasoning techniques normally
associated with a human expert. It could also be viewed as a computer system that
performs at or near the level of a human expert in a particular field of endeavour.
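The rule-based reasoning an expert system performs can be sketched in a few lines. The rules and symptoms below are invented purely for this illustration; a real expert system would encode knowledge elicited from human experts.

```python
# Toy expert system: forward chaining over hand-written if-then rules.
# Each rule fires when all of its conditions are known facts.
rules = [
    ({"fever", "cough"}, "suspect flu"),
    ({"suspect flu", "short of breath"}, "refer to doctor"),
]

def forward_chain(facts):
    # Keep firing rules until no new conclusions can be derived.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short of breath"})
# derives "suspect flu" first, which in turn triggers "refer to doctor"
```

Chaining conclusions through rules like this is what lets such systems mimic a human expert's step-by-step reasoning in a narrow domain.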
Fig. 5: Relationship among the diverse fields of AI
CHAPTER – 4
COMPANY PROFILE
Fig. 6
4.1 Advanced Micro Devices (AMD)
California
CEO: Lisa Su
Type: Public
Key People: Lisa Su (chair & CEO), Victor Peng (president), John Edward
4.2 NVIDIA
America
Website: www.nvidia.com
Type: Public
Key people: Jensen Huang (CEO and President), Bill Dally (Chief
Scientist)
4.3 Advanced Micro Devices, Inc. (AMD)
4.3.1 OVERVIEW:
Advanced Micro Devices, Inc. (AMD) is a large American company from Sunnyvale,
California that makes computer hardware components. It makes many different
computer parts, but it is most famous for its central processing units (CPUs) and graphics
processing units (GPUs). Other important products are the motherboard chipsets for
its CPUs. AMD started as a company that made products for Intel, another large
hardware company and competitor of AMD. In 2006, AMD bought ATI Technologies for
$4.3 billion in cash and 52 million shares of AMD stock. In 2020, AMD announced that it
was buying Xilinx, a company that makes field-programmable gate arrays (FPGAs),
circuits that can be reconfigured using code. Ryzen is AMD's brand name for its
mainstream CPUs. Ryzen CPUs have between 2 and 64 cores and can achieve speeds
above 5 gigahertz (GHz). Radeon is its brand name for other computer products such as
GPUs, including parts made by other companies that carry AMD's brand (OEM).
4.3.2 HISTORY:
1969 – AMD was founded in 1969 by Walter Jeremiah (“Jerry”) Sanders, a former
executive at Fairchild Semiconductor Corporation, and seven others.
1970 – The company released its first product and went public two years later. In the
mid-1970s the company began producing computer chips. Starting out as a second-
source manufacturer of computer chips, the company placed a great emphasis on quality
and grew steadily.
1982 – The company began supplying second-source chips for Intel Corporation, which
made the microprocessor used in IBM personal computers (PCs).
1986 – The agreement with Intel ended in 1986.
1991 – AMD released the Am386 microprocessor family, a reverse-engineered chip that
was compatible with Intel’s next-generation 32-bit 386 microprocessor.
1994 – There ensued a long legal battle that was finally decided in a 1994 U.S. Supreme
Court ruling in AMD’s favour. That same year, Compaq Computer Corporation contracted
with AMD to produce Intel-compatible chips for their computers.
1996 – AMD acquired a microprocessor company known as NexGen and began
branching out from the Intel-compatible chip market.
2000 – AMD introduced the Athlon processor, which was designed to run the Microsoft
Corporation’s Windows operating system. With the release of the Athlon processor, AMD
became the first company to produce a 1-GHz (gigahertz) microprocessor, which marked
AMD as a serious competitor in the chip market.
2003 – The company released the Opteron chip, another product that showcased the
company’s ability to produce high-end chips.
2006 – AMD absorbed ATI Technologies, a manufacturer of video graphics cards for use
in PCs.
2008 – AMD announced plans to split the company in two—with one part designing
microprocessors and the other manufacturing them. This announcement followed news
that the Advanced Technology Investment Company and the Mubadala Development
Company would acquire a controlling interest in AMD, pending approval by shareholders
and the U.S. and German governments.
2009 – The European Commission fined rival Intel a record €1.06 billion (£948 million;
$1.45 billion) for engaging in anticompetitive practices that violated the European Union’s
antitrust laws. These practices allegedly involved financially compensating and providing
rebates to manufacturers and retailers who favoured its computer chips over those of
AMD, as well as paying manufacturers to cancel or postpone the launching of products
utilizing AMD’s chips.
2014 – The company was restructured into two parts: computing and graphics, which
made processors for personal computers, and enterprise, embedded, and semi-custom,
which made more-specialized processors.
4.3.3 PRODUCTS:
1. Ryzen Processors: Known for their high performance and efficiency, catering to
desktop, laptop, and server markets.
2. Radeon Graphics Cards: Offering competitive graphics performance for gaming and
professional applications.
3. EPYC Server Processors: Designed for data centers and enterprise computing,
offering high core counts and performance for server workloads.
4. Ryzen Threadripper CPUs: Targeted at high-end desktop users and content creators,
offering exceptional multi-threaded performance.
7. A-Series APUs: Combining CPU and GPU cores on a single chip, targeting
mainstream desktop and laptop markets.
8. Radeon Pro Graphics: Optimized for professional workflows such as content creation,
engineering, and scientific computing.
AMD has solidified its position as a key player in the semiconductor industry, particularly
in the CPU and GPU markets. With its Ryzen processors, AMD has successfully
challenged Intel's dominance, offering competitive performance and value across
various segments. In the GPU market, while Nvidia maintains a strong presence, AMD's
Radeon graphics cards have provided viable alternatives, especially in the mid-range
and budget segments. Additionally, AMD's efforts in the data center and AI markets with
products like EPYC server processors and Radeon Instinct accelerators have shown
promise, although competition remains fierce. Through strategic partnerships and
innovative product offerings, AMD continues to expand its market presence, although it
faces ongoing challenges from industry rivals.
Financial Position:
4.4.1 OVERVIEW:
In 2023, NVIDIA was said to be the world’s most valuable chipmaker. Demand for its
artificial intelligence (AI) chips more than doubled its income in 2023, and its stock
market value jumped to more than $1 trillion.
4.4.2 HISTORY:
NVIDIA Corporation, founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis
Priem, has a rich history marked by significant technological innovations and industry
milestones:
In 2016, NVIDIA launched the GeForce GTX 10 series GPUs based on the Pascal
architecture, delivering substantial performance gains for gaming and VR
applications.
Throughout its history, NVIDIA has remained at the forefront of GPU technology, driving
advancements in gaming, professional visualization, AI, and autonomous systems. Its
commitment to innovation and strategic partnerships has solidified its position as a
leading semiconductor company with a global impact.
4.4.3 PRODUCTS AND SOLUTIONS:
NVIDIA offers a diverse range of products and solutions across multiple industries,
leveraging its expertise in graphics processing units (GPUs), artificial intelligence (AI),
and high-performance computing (HPC):
3. AI and Deep Learning Platforms:
NVIDIA GPU Cloud (NGC): Cloud-based platform that provides GPU-optimized
containers, software frameworks, and pre-trained models for accelerating AI and
HPC workloads in the cloud.
NVIDIA CUDA: Parallel computing platform and programming model that enables
developers to harness the power of NVIDIA GPUs for accelerating scientific
simulations, data analytics, and AI applications.
NVIDIA Jarvis: Conversational AI platform for building virtual assistants,
chatbots, and natural language processing applications with advanced
speech recognition and language understanding capabilities.
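The data-parallel model that CUDA exposes can be illustrated, very loosely, in plain Python: a “kernel” function is written as if for a single element, and the GPU runs one instance per element in parallel (simulated here with an ordinary loop). This is a conceptual sketch only, not actual CUDA code.

```python
# Illustration of CUDA's per-element kernel model using SAXPY (out = a*x + y).
def saxpy_kernel(i, a, x, y, out):
    # In real CUDA, this body runs once per GPU thread, with i = thread index.
    out[i] = a * x[i] + y[i]

def launch(n, a, x, y):
    out = [0.0] * n
    for i in range(n):          # the GPU would execute these iterations in parallel
        saxpy_kernel(i, a, x, y, out)
    return out

print(launch(4, 2.0, [1, 2, 3, 4], [10, 10, 10, 10]))  # [12.0, 14.0, 16.0, 18.0]
```

Writing computations in this one-element-at-a-time style is what allows GPUs to accelerate simulations, analytics and AI workloads across thousands of threads at once.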
5. Gaming Technologies:
NVIDIA GeForce NOW: Cloud gaming service that allows users to stream PC
games to various devices, providing access to a vast library of games without the
need for high-end gaming hardware.
NVIDIA Reflex: Technology designed to reduce system latency in competitive
gaming, providing faster response times for improved gameplay and competitive
advantage.
Nvidia remains a dominant force in the semiconductor industry, particularly in the GPU
market. With its GeForce lineup, Nvidia has long been the preferred choice for high-performance
gaming and graphics processing, maintaining a strong foothold in both
consumer and professional markets. Additionally, Nvidia's Tesla GPUs have established
themselves as a go-to solution for data centers, AI, and high-performance computing
tasks, further solidifying Nvidia's position in these lucrative segments. Nvidia's CUDA
platform has become a standard for parallel computing, further enhancing its ecosystem
and market influence. Despite facing competition from rivals like AMD and Intel, Nvidia's
relentless innovation, strategic partnerships, and diversified product portfolio continue to
sustain its leading market position.
Financial position:
Fourth-quarter revenue – $22.1 billion, up 22% from Q3 and up 265% year over year
Fourth-quarter gross margin – 72.7%
Fourth-quarter EPS – increased over the past week from 5.1 to 5.13 (0.59%)
Full-year revenue – $60.92 billion for the fiscal year ended in January 2024
CHAPTER – 5
RESEARCH METHODOLOGY
Fig. 10
This Research Methodology chapter describes research methods, approaches and
designs in detail, highlighting those used throughout the study and justifying each
choice by describing the advantages and disadvantages of every approach and design
in view of their practical applicability to this research.
Research can be defined as “an activity that involves finding out, in a more or less
systematic way, things you did not know” (Walliman and Walliman, 2011, p.7).
“Methodology is the philosophical framework within which the research is conducted or
the foundation upon which the research is based” (Brown, 2006). O’Leary (2004, p.85)
describes methodology as the framework which is associated with a particular set of
paradigmatic assumptions that we will use to conduct our research. Allan and Randy
(2005) insist that a research methodology should meet certain criteria.
Once our sample is selected, we need a plan for asking questions and recording
answers. The most common types of surveys are questionnaires and interviews. A
questionnaire is a series of written statements or questions.
With an interview, the researcher personally asks subjects a series of questions and
gives participants the freedom to respond as they wish. Both questionnaires and
interviews can include open-ended questions (allowing the subjects to respond freely),
or close-ended questions (including a selection of fixed responses).
The current project report has been prepared using both primary as well as secondary
data sources.
Primary Research
Technically, they “own” the data. Primary research is solely carried out to address a
certain problem, which requires in-depth analysis.
Here are some of the primary research methods organizations or businesses use to
collect data:
personal approach. However, the success of face-to-face interview depends heavily
on researcher’s ability to ask questions and his/her experience related to conducting
such interviews in the past. The types of questions that are used in this type of
research are mostly open-ended questions. These questions help to gain in-depth
insights into opinions and perceptions of respondents.
2. Online surveys: Once conducted with pen and paper, surveys have come a long way
   since then. Today, most researchers use online surveys to gather information from
   respondents. Online surveys are convenient: they can be sent by email or filled out
   online, and can be accessed on handheld devices like smartphones, tablets, iPads
   and similar devices. Once a survey is deployed, respondents are given a stipulated
   amount of time to answer the questions and send the survey back to the researcher.
   In order to get maximum information from respondents, surveys should have a good
   mix of open-ended and close-ended questions. Surveys should not be lengthy;
   otherwise respondents lose interest and tend to leave them half done.
3. Focus groups: This popular research technique is used to collect data from a small
   group of people, usually restricted to 6-10. A focus group brings together people who
   are experts in the subject matter for which the research is being conducted. A
   moderator stimulates discussion among the members to get deeper insights.
   Organizations and businesses can use this method especially to identify a niche
   market or to learn about a specific group of consumers.
a bakery brand wants to know how people react to its new biscuits, an observer notes
the first reactions of consumers and evaluates the collective data to draw an inference.
Data collected is first-hand and accurate; in other words, there is no dilution of
data.
Primary research focuses mainly on the problem at hand, which means the entire
attention is directed toward finding a probable solution to a pinpointed subject matter.
Data collection can be controlled: primary research gives a means to control how
data is collected and used.
Primary research is a time-tested method; therefore, one can rely on the results
obtained from conducting this type of research.
One of the major disadvantages of primary research is that it can be quite expensive
to conduct. One may be required to spend a large sum of money depending on the
setup or the primary research method used, and not all businesses or organizations
may be able to spend a considerable amount.
This type of research can be time-consuming. Conducting interviews and sending
and receiving online surveys can be an exhaustive process, requiring an investment
of time and patience for the process to work. Moreover, evaluating results and
applying the findings to improve a product or service needs additional time.
Sometimes using just one primary research method may not be enough. In such
cases, more than one method is required, which can increase both the time needed
to conduct the research and the associated cost.
For gathering the primary data, the target population has been identified as the clients
dealing with the organization in different departments.
Sampling technique: Simple random sampling with judgement sampling
100 clients were taken into the sample based upon the judgement of the researcher,
and these clients were randomly selected on the basis of their interaction with the
organization.
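The sampling step described above can be sketched as follows. The client register, the interaction counts and the judgement threshold are all hypothetical placeholders invented for this illustration.

```python
# Sketch: judgement screening followed by simple random sampling of 100 clients.
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical register: each client with a count of interactions with the organization.
clients = {f"client_{i}": random.randint(0, 20) for i in range(1, 501)}

# Judgement step: keep only clients with meaningful interaction (threshold is illustrative).
eligible = [c for c, interactions in clients.items() if interactions >= 5]

# Simple random sampling: draw 100 distinct clients from the eligible pool.
sample = random.sample(eligible, 100)
print(len(sample))
```

Drawing with `random.sample` gives every eligible client an equal chance of selection, which is what makes the second stage a simple random sample.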
Data Collection: The primary research has been based on data collected from the
identified respondents using a self-structured questionnaire. The questionnaire contains
statements regarding the details of the respondents and their opinions on the impact of
the uses of AI in the modern world.
Secondary Research
Secondary research or desk research is a research method that involves using already
existing data. Existing data is summarized and collated to increase the overall
effectiveness of research. Secondary research includes research material published in
research reports and similar documents. These documents can be made available by
public libraries, websites, data obtained from already filled in surveys etc. Some
government and non-government agencies also store data that can be used for research
purposes and can be retrieved from them.
Secondary research is much more cost-effective than primary research, as it makes use
of already existing data, unlike primary research, where data is collected first-hand by
organizations or businesses (or by a third party employed to collect it on their behalf).
Secondary research is cost-effective, and that is one of the reasons it is a popular
choice among many businesses and organizations. Not every organization is able to
pay a huge sum of money to conduct research and gather data. Secondary research is
therefore aptly termed “desk research”, as the data can be retrieved while sitting
behind a desk.
Following are popularly used secondary research methods and examples:
1. Data available on the internet: One of the most popular ways of collecting secondary
data is using the internet. Data is readily available on the internet and can be
downloaded at the click of a button. This data is practically free of cost, or one may
have to pay a negligible amount to download the already existing data. Websites
have a lot of information that businesses or organizations can use to suit their
research needs. However, organizations need to consider only authentic and trusted
websites when collecting information.
2. Government and non-government agencies: Data for secondary research can also be
   collected from some government and non-government agencies. For example, the US Government
Printing Office, US Census Bureau, and Small Business Development Centers have
valuable and relevant data that businesses or organizations can use. There is a
certain cost applicable to download or use data available with these agencies. Data
obtained from these agencies are authentic and trustworthy.
3. Public libraries: Public libraries are another good source to search for data for
secondary research. Public libraries have copies of important research that were
conducted earlier. They are a storehouse of important information and documents
from which information can be extracted. The services provided in these public
libraries vary from one library to another. More often than not, libraries have a huge
collection of government publications with market statistics, large collections of business
directories and newsletters.
5. Commercial information sources: Local newspapers, journals, magazines, radio and
TV stations are a great source to obtain data for secondary research. These
commercial information sources have firsthand information on economic
developments, political agenda, market research, demographic segmentation and
similar subjects. Businesses or organizations can request to obtain data that is most
relevant to their study. Businesses not only have the opportunity to identify their
prospective clients but can also know about the avenues to promote their products
or services through these sources as they have a wider reach.
This project has considered data available from all the above given sources of secondary
data.
CHAPTER – 6
Fig. 11
Primary research, while invaluable for generating firsthand data and insights, also
comes with its own set of limitations:
1. Cost and Time: Conducting primary research can be expensive and time-consuming.
It involves various expenses such as participant recruitment, data collection tools,
incentives, and researcher time. Depending on the scope and scale of the research,
the costs and time required can be substantial.
3. Sampling Bias: There's a risk of sampling bias, where the sample chosen for the
research may not be representative of the larger population. This can occur due to
various factors such as sampling methods, non-response bias, or self-selection bias.
5. Validity and Reliability: Ensuring the validity and reliability of primary research
findings can be challenging. Factors such as researcher bias, respondent bias,
measurement error, and sampling error can affect the accuracy and credibility of the
results.
includes obtaining informed consent, protecting participants' confidentiality, and
ensuring that the research does not cause harm or discomfort.
CHAPTER – 7
Fig. 12
ANALYSIS BASED ON QUESTIONNAIRE
Response   Frequency   Percentage
Yes        68          68%
No         20          20%
Maybe      12          12%
Table 1: Familiarity with AI
In the table above, 68% of the respondents are familiar with Artificial Intelligence,
and the same has been represented graphically below:
Fig. 13
Q2. In which areas do you think AI will have the biggest impact?
The analysis shows that, out of the total respondents, the majority think that AI will
have the biggest impact on the finance sector, due to the increasing risks and frauds
in the economy. The same has been represented graphically below.
Fig. 14: Responses by sector (Healthcare, Transportation, Finance, Education, Entertainment)
Q3. Which of the following AI technologies have you used?
It is evident from the responses that the most popular AI technology in use is virtual
assistants, such as Siri (Apple), Alexa (Amazon), Google Assistant and Bixby, for
purposes like customer service, data entry and reducing costs.
Fig. 15: Technologies used (machine learning, NLP, computer vision, chatbots, virtual assistants)
Q4. Do you think AI can improve the learning experience for students?
Response   Frequency   Percentage
Yes        60          60%
No         12          12%
Maybe      28          28%
The table given above shows that 60% of respondents agree that AI can improve the
learning experience for students, since it can improve students’ overall performance
and boost their motivation. The same has been shown graphically below:
Fig. 16
Q5. Which of the following areas can AI help with in benefitting the students? (select all
that apply)
It is clear from the responses that Personalized Learning (90%) is the area most often
chosen as one where AI can benefit students, followed by Tutoring Support (80%) and
Automated Grading (75%).
Fig. 17: Areas (Personalized Learning, Automated Grading, Tutoring Support, Career Guidance, Educational Content Generation)
Q6. Would you consider pursuing a career in AI or related fields?
Response   Frequency   Percentage
Yes        62          62%
No         23          23%
Maybe      15          15%
The table above indicates that 62% of the respondents would consider pursuing a
career in AI or related fields.
Fig. 18
Q7. Do you think AI will replace human jobs in the future?
Response   Frequency   Percentage
Yes        70          70%
No         16          16%
Maybe      14          14%
The table above shows that, out of the total respondents, 70% agree that AI will
replace human jobs in the future, since it can perform repetitive tasks as well as
tasks that are currently difficult or impossible for humans to do.
Fig. 19: Will AI replace human jobs in the future? (Yes 70%, No 16%, Maybe 14%)
Q8. How did you come to know about AMD and NVIDIA?
Response                 Frequency
Newspaper/magazine       20
Digital/social media     50
Friends/word of mouth    25
Tele-calling             5
Table 8: Awareness about the companies
The table given above highlights the marketing channels that have helped the
most in creating awareness about the companies.
Fig. 20: How respondents came to know about AMD and NVIDIA (Digital/social media 50%, Friends/word of mouth 25%, Newspaper/magazine 20%, Tele-calling 5%)
Q9. Are you following AMD and NVIDIA on social media platforms?
According to the responses, most respondents (60%) follow AMD and NVIDIA on
social media platforms, highlighting the popularity and importance of this
medium for building brand awareness of graphics cards.
Fig. 21: Respondents following AMD and NVIDIA on social media (Yes 60%, No 40%)
Q10. Would you like to recommend AMD and NVIDIA to friends and family?
Response   Frequency   Percentage
Yes        85          85%
No         15          15%
Table 10: Recommending AMD and NVIDIA to friends and family
Fig. 22: Would you recommend AMD and NVIDIA to friends and family? (Yes 85%, No 15%)
FINDINGS BASED ON THE STUDY
Artificial intelligence (AI) has had a significant impact on the gaming industry in recent
years, with many games now incorporating AI to enhance gameplay and make it more
immersive for players.
For example, NPCs (non-player characters) might have their own goals and motivations that they
pursue, or they might react differently to different player actions. This can make the game
feel more alive and believable, as players feel they are interacting with real
characters rather than merely scripted entities.
AI is also being used in game design to create more dynamic and interesting levels and
content. This can help developers create more diverse and engaging games with less
effort. For example, AI might be used to design game levels that are procedurally
generated, meaning that they are created on the fly as the player progresses through
the game. This can help keep the game fresh and interesting for players, as they are not
simply playing through the same levels over and over again.
7.2 AI IN GRAPHICS CARDS
Artificial intelligence (AI) has significantly impacted graphics cards and their functionality
in recent years, particularly in enhancing performance, efficiency, and user experience.
One of the primary roles of AI in graphic cards is through technologies like NVIDIA's
Tensor Cores or AMD's AI-accelerated features. These specialized hardware units are
designed to accelerate AI workloads, including deep learning inference and training
tasks. In gaming, AI-based technologies like NVIDIA's DLSS (Deep Learning Super
Sampling) utilize neural networks to upscale lower-resolution images in real-time,
resulting in higher-quality visuals while maintaining smooth frame rates.
The new NVIDIA RTX GPU and AMD CPU-powered AI workstations provide the power
and performance required for training smaller models and for local fine-tuning,
helping to offload data center and cloud resources for AI development tasks. The
devices let users select single- or multi-GPU configurations as required for their
workloads.
Smaller trained AI models also provide the opportunity to use workstations for local
inferencing. RTX GPU and AMD CPU-powered workstations can be configured to run
these smaller AI models for inference serving for small workgroups or departments.
With up to 48GB of memory in a single NVIDIA RTX GPU, these workstations offer a
cost-effective way to reduce compute load on data centers. And when professionals do
need to scale training and deployment from these workstations to data centers or the
cloud, the NVIDIA AI Enterprise software platform enables seamless portability of
workflows and toolchains.
RTX GPU and AMD CPU-powered workstations also enable cutting-edge visual
workflows. With accelerated computing power, the new workstations enable highly
interactive content creation, industrial digitalization, and advanced simulation and
design.
7.3 EMERGING TRENDS IN AI
7.4 FUTURE PREDICTIONS FOR AI
Two of the hottest topics in AI today are agents and artificial general intelligence (AGI).
Agents are AI systems that can complete loosely defined tasks: say, planning and
booking your upcoming trip. AGI refers to an artificial intelligence system that meets or
exceeds human capabilities on every dimension.
When people envision the state of AI in 2030, agents and/or AGI are often front and
center.
Yet we predict that these two terms won't even be widely used by 2030, because
they will cease to be relevant as independent concepts.
By 2030, AI will be unfathomably more powerful than humans in ways that will transform
our world. It will also continue to lag human capabilities in other ways. If an artificial
intelligence can understand and explain every detail of human biology down to the
atomic level, who cares if it is “general” in the sense of matching human capabilities
across the board?
Tom Cruise’s Cruise Oblivion: Age of Tomorrow is a movie where you can find machines
acting and thinking like humans. AI applications work faster, with greater operation
efficiency and accuracy, and with better decision-making than humans. This says that
artificial intelligence achievements closely mimic human intelligence in the sense of
understanding, reasoning, and learning. However, there is cause and effect with these
innovations, and the significant advancements outperform humans in specific tasks,
which challenges the scope of human intelligence. It symbolises that the future of
developing artificial intelligence requires experts, and it creates various career
opportunities.
Now the bigger question arises: will AI replace humans in the future?
No, AI will not replace human intelligence, as it is humans who develop AI
applications through programming and algorithms. Automation makes it easier to
replace manual labour, and today, in every sector, these AI technologies are making it
easier to complete complex tasks. There are several reasons supporting this view.
CHAPTER – 8
SWOT ANALYSIS
Fig. 23
8.1 AMD (Advanced Micro Devices)
STRENGTHS:
1. Product Portfolio: AMD offers a diverse range of products including CPUs, GPUs,
and semi-custom chips for gaming consoles, which diversifies its revenue streams
and reduces dependency on any one product line.
2. Technological Innovation: AMD has made significant strides in CPU and GPU
technologies, especially with its Ryzen and Radeon product lines, offering
competitive performance and efficiency compared to its competitors.
3. Partnerships and Alliances: Collaborations with companies like Microsoft and Sony
for providing chips for their gaming consoles have strengthened AMD’s market
position.
4. Strong Financial Performance: AMD has shown consistent revenue growth and
improved profitability in recent years, indicating its ability to effectively compete in the
market.
WEAKNESSES:
OPPORTUNITIES:
1. Growth in Data Centers: With increasing demand for data centers and cloud
computing, there is an opportunity for AMD to expand its presence in the server CPU
market, where it competes with Intel.
2. Emerging Technologies: Advancements in technologies such as artificial intelligence,
machine learning, and autonomous vehicles present opportunities for AMD to
develop specialized chips and solutions tailored to these markets.
3. Strategic Partnerships: Forming strategic alliances with companies in emerging
markets or industries can help AMD expand its market reach and enhance its product
offerings.
4. Expansion in Emerging Markets: There is potential for AMD to further penetrate
emerging markets where demand for computing and graphics solutions is growing
rapidly.
THREATS:
1. Competition: Intense competition from established players like Intel and Nvidia, as
well as emerging competitors, could potentially erode AMD’s market share and
profitability.
2. Market Saturation: The PC market, which is a significant revenue source for AMD,
may become saturated or experience stagnant growth, limiting the company’s
potential for expansion.
3. Supply Chain Disruptions: Any disruptions in the supply chain, such as shortages of
key components or geopolitical tensions affecting manufacturing, could impact
AMD’s ability to meet demand and deliver products to customers.
4. Technological Shifts: Rapid technological advancements and shifts in consumer
preferences could render AMD’s existing products obsolete or less competitive,
requiring continuous innovation and adaptation.
8.2 NVIDIA
STRENGTHS:
1. Market Leadership: Nvidia is a market leader in the GPU segment, with its GeForce
products dominating the gaming market and its Tesla GPUs being widely used in
data centers and AI applications.
2. Technological Superiority: Nvidia’s GPUs are known for their performance, efficiency,
and advanced features like ray tracing and AI acceleration, giving the company a
competitive edge in various markets.
3. Diversification: Nvidia has successfully diversified its business beyond gaming into
areas such as data centers, automotive, and professional visualization, reducing its
reliance on any single market segment.
4. Strong Financial Performance: Nvidia has consistently delivered strong financial
results, with robust revenue growth and profitability, providing resources for
investments in R&D and strategic initiatives.
WEAKNESSES:
1. Dependency on GPU Market: While Nvidia has diversified its business, it remains
heavily reliant on the GPU market, exposing the company to risks associated with
fluctuations in demand or competitive pressures.
2. High R&D Costs: Developing cutting-edge GPU technologies requires substantial
investments in research and development, which could impact Nvidia’s profitability if
these investments do not yield expected results.
3. Regulatory Challenges: Nvidia’s business operations are subject to various
regulations and compliance requirements, which could pose challenges or
constraints on its activities in certain regions or markets.
OPPORTUNITIES:
1. Data Center Growth: Continued growth in demand for data center and AI-related
services presents opportunities for Nvidia to expand its data center GPU business
and develop specialized solutions for AI workloads.
2. Autonomous Vehicles: The adoption of autonomous vehicles and advanced driver-
assistance systems (ADAS) creates opportunities for Nvidia to provide GPU
solutions for automotive applications, leveraging its expertise in AI and computer
vision.
3. AI and Edge Computing: The proliferation of AI and edge computing technologies
offers opportunities for Nvidia to develop GPUs optimized for edge devices and IoT
applications, catering to the growing demand for AI inference at the network edge.
4. Gaming Market Expansion: Nvidia can further capitalize on the growing gaming
market by introducing new products and services targeted at different segments of
gamers, including casual gamers, esports enthusiasts, and VR users.
THREATS:
1. Competition: Nvidia faces intense competition from rivals like AMD, Intel, and
emerging players in various markets, which could impact its market share, pricing
power, and profitability.
2. Technological Disruption: Rapid advancements in technology or shifts in consumer
preferences could render Nvidia’s products obsolete or less competitive,
necessitating continuous innovation and adaptation.
3. Supply Chain Risks: Disruptions in the global supply chain, such as shortages of
critical components or geopolitical tensions affecting manufacturing, could impact
Nvidia’s ability to meet demand and deliver products to customers.
4. Regulatory Risks: Regulatory changes or legal challenges related to antitrust,
intellectual property, or data privacy could adversely affect Nvidia’s business
operations and financial performance, especially in highly regulated markets.
SWOT TABLE SUMMARISED
CHAPTER – 9
CONCLUSION
Fig. 24
This chapter presents the conclusions drawn from the study.
Advanced Micro Devices (AMD) and Nvidia are two semiconductor giants battling for
supremacy in the high-growth markets of data center, artificial intelligence (AI), and
gaming. As we look ahead to 2024, both companies are well-positioned to benefit from
the exponential rise of AI applications, but their paths to growth will differ. This
comparative analysis will examine the financial metrics, competitive advantages, risks,
and analyst opinions for AMD and Nvidia.
Nvidia is projected to achieve explosive revenue growth in fiscal 2024 of 126% to $58.1
billion, driven by the massive adoption of its AI GPUs in data centers. Analysts expect
another strong year in fiscal 2025 with 91% growth to $116 billion as supply constraints
ease and Nvidia captures more of the expanding AI opportunity.
In contrast, AMD is expected to grow revenue at a more modest 10.3% in 2023 to $25
billion, rebounding from a 3.9% decline in 2022. However, analysts model an
acceleration to 21.9% growth in 2024 to $30.5 billion as AMD's latest EPYC server CPUs
gain share and its data center GPU business ramps up.
While Nvidia's growth is eye-popping, AMD is coming from a large revenue base of $23.6
billion in 2022 compared to Nvidia's $26.9 billion. Still, Nvidia's growth trajectory
positions it to pull far ahead of AMD in total revenue by 2024.
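As a purely illustrative check of how such growth projections compound, successive year-over-year growth rates can be applied to a base-year revenue. The ~$25.7 billion base below is back-calculated from the report's fiscal-2024 figure, not an official number:

```python
def project_revenue(base: float, growth_rates: list[float]) -> list[float]:
    """Apply successive year-over-year growth rates (1.26 = +126%)
    to a base-year revenue, returning one projection per year."""
    projections = []
    current = base
    for g in growth_rates:
        current *= 1 + g  # grow by g over the previous year
        projections.append(round(current, 1))
    return projections

# Implied Nvidia base of ~$25.7B, then +126% and +91% (report's estimates)
print(project_revenue(25.7, [1.26, 0.91]))  # → [58.1, 110.9]
```

Compounding the stated rates gives roughly $111 billion rather than the report's $116 billion for fiscal 2025, suggesting the quoted figures are rounded estimates drawn from slightly different bases.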
The AI opportunity for both Nvidia and AMD is immense but still nascent. Forecasting
growth trajectories in such a dynamic market is challenging. If enterprise adoption of AI
is slower than expected or if hyperscalers shift workloads to in-house chips, it would
impact growth estimates.
Geopolitical tensions around AI and potential export restrictions are another risk factor.
Nvidia also faces risks around its gaming business which can be volatile based on crypto
demand and chip shortages. AMD has exposure to the cyclical PC market.
In conclusion, while both AMD and Nvidia stand to benefit handsomely from the AI
megatrend, Nvidia appears to have the edge based on its current market
leadership, software advantages, and explosive near-term growth trajectory. AMD
should still be able to carve out a profitable position in AI inference and continue share
gains in server CPUs. However, catching up to Nvidia in AI training will be an uphill
battle. Overall, both AMD and Nvidia are at the center of one of the most transformative
technologies of our time, providing a long runway for growth.
Gartner has revealed in a recent report that companies incorporating AI are projected to
have twice the market share and 10 times more efficiency than their competitors in 2024.
In 2024, it's estimated that businesses will interact with their customers more through
AI-powered communication channels than human-led efforts as per a study by IBM.
CHAPTER – 10
BIBLIOGRAPHY
Fig. 25
https://en.wikipedia.org/wiki/AMD
https://en.wikipedia.org/wiki/Nvidia
www.nasdaq.com
https://finance.yahoo.com/
https://www.forbes.com/
www.techtarget.com
https://pianalytix.com/
APPENDIX
(Attached Copy of Questionnaire)
QUESTIONNAIRE
Kindly fill in the details given in the questionnaire; they are required by the
intern for preparing her project.
b. In which areas do you think AI can have the biggest impact? (select all that apply)
()
()
()
()
()
()
c. Which of the following AI technologies have you used? (Select all that apply)
() Machine Learning
() Natural Language Processing (NLP)
() Computer Vision
() Robotics
() Chatbots
() Virtual Assistants
e. Which of the following areas do you think AI can benefit students? (Select all that apply)
() Personalized Learning
() Automated Grading
() Tutoring Support
() Career Guidance
() Educational Content Generation
j. Would you like to recommend AMD and NVIDIA to friends and family?
() Yes
() No