8 AI Business Trends in 2024, According To Stanford Researchers
AI makes workers more productive, but we are still lacking in regulations, according to new
research. The 2024 AI Index Report, published by the Stanford University Human-Centered
Artificial Intelligence institute, has uncovered the top eight AI trends for businesses, including
how the technology still does not best the human brain on every task.
TechRepublic digs into the business implications of these takeaways, with insight from report
co-authors Robi Rahman and Anka Reuel.
Figure A
While AI did surpass human capabilities in image classification, visual reasoning and English
understanding, the results suggest there is potential for businesses to be using AI for tasks
where human staff would actually perform better. Many businesses are already concerned about
the consequences of over-reliance on AI products.
Figure B
Training costs of AI models, 2017 to 2023. Image: AI Index Report 2024/
Stanford University HAI/Epoch, 2023
In October 2023, the Wall Street Journal reported that Google, Microsoft and other big tech
players were struggling to monetise their generative AI products due to the massive costs
associated with running them. There is a risk that, if the best technologies become so
expensive that they are solely accessible to large corporations, their advantage over SMBs
could increase disproportionately. This was flagged by the World Economic Forum back in
2018.
However, Rahman highlighted that many of the best AI models are open source and thus
available to businesses of all budgets, so the technology should not widen any gap. He told
TechRepublic: “Open-source and closed-source AI models are growing at the same rate. One of
the largest tech companies, Meta, is open-sourcing all of their models, so people who cannot
afford to train the largest models themselves can just download theirs.”
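For readers wondering what “just download theirs” looks like in practice, the sketch below is a minimal, illustrative example, not taken from the report, of loading an openly released model with the Hugging Face transformers library. The model ID, prompt and settings are assumptions for illustration only; many open-weight models, including Meta’s, require accepting a licence on the Hugging Face Hub and authenticating before the weights will download.

```python
# Minimal sketch (assumptions noted above): download and run an open-weight
# model instead of training one from scratch.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical choice of open-weight model

# Pull the tokenizer and model weights from the Hugging Face Hub
# (cached locally after the first download).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion for a simple business-style prompt.
prompt = "Draft a short reply thanking a customer for their feedback."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```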
Figure C
Impact of AI on customer support agent productivity. Image: AI Index
Report 2024/Stanford University HAI/Brynjolfsson et al., 2023
In the case of consultants, the use of GPT-4 bridged the gap between low-skilled and high-
skilled professionals, with the low-skilled group experiencing more of a performance boost
(Figure D). Other research has also indicated that generative AI in particular could act as an
equaliser, as less experienced, lower-skilled workers get more out of it.
Figure D
However, other studies did suggest that “using AI without proper oversight can lead to
diminished performance,” the researchers wrote. For example, there are widespread reports
that hallucinations are prevalent in large language models that perform legal tasks. Other
research has found that we may not reach the full potential of AI-enabled productivity gains for
another decade, as unsatisfactory outputs, complicated guidelines and lack of proficiency
continue to hold workers back.
Figure E
AI-related legislation also continues to mount in the EU, with 46, 22 and 32 new regulations being
passed in 2021, 2022 and 2023, respectively. In this region, regulations tend to take a more
expansive approach and most often cover science, technology and communications.
It is essential for businesses interested in AI to stay updated on the regulations that impact
them, or they put themselves at risk of heavy non-compliance penalties and reputational
damage. Research published in March 2024 found that only 2% of large companies in the U.K.
and EU were aware of the incoming EU AI Act.
Figure F
Total global private investment in generative AI from 2019 to 2023. Image:
AI Index Report 2024/Stanford University HAI/Quid, 2023
The buildout of generative AI capabilities is likely to meet demand from businesses looking to
adopt it into their processes. In 2023, generative AI was cited in 19.7% of all earnings calls of
Fortune 500 companies, and a McKinsey report revealed that 55% of organisations now use AI,
including generative AI, in at least one business unit or function.
Awareness of generative AI boomed after the launch of ChatGPT on November 30, 2022, and
since then, organisations have been racing to incorporate its capabilities into their products or
services. A recent survey of 300 global businesses conducted by MIT Technology Review
Insights, in partnership with Telstra International, found that respondents expect their number
of functions deploying generative AI to more than double in 2024.
However, there is some evidence that the boom in generative AI “could come to a fairly swift
end”, according to leading AI voice Gary Marcus, and businesses should be wary. This is
primarily due to limitations in current technologies, such as potential for bias, copyright issues
and inaccuracies. According to the Stanford report, the finite amount of online data available to
train models could exacerbate existing issues, placing a ceiling on improvements and
scalability. It states that AI firms could run out of high-quality language data by 2026, low-
quality language data in two decades and image data by the late 2030s to mid-2040s.
Figure G
The responsible AI benchmarks used in the development of popular AI
models. Image: AI Index Report 2024/Stanford University HAI
Reuel, a PhD student in the Stanford Intelligent Systems Laboratory, told TechRepublic in an
email: “There are currently no reporting requirements, nor do we have robust evaluations that
would allow us to confidently say that a model is safe if it passes those evaluations in the first
place.”
Without standardisation in this area, the risk that some untrustworthy AI models may slip
through the cracks and be integrated by businesses increases. “Developers might selectively
report benchmarks that positively highlight their model’s performance,” the report added.
Reuel told TechRepublic: “There are multiple reasons why a harmful model can slip through the
cracks. Firstly, no standardised or required evaluations making it hard to compare models and
their (relative) risks, and secondly, no robust evaluations, specifically of foundation models, that
allow for a solid, comprehensive understanding of the absolute risk of a model.”
Figure H
Global opinions on the impact AI will have on current jobs in 2023. Image:
AI Index Report 2024/Stanford University HAI/Ipsos, 2023
Other surveys referenced in the AI Index Report found that 53% of Americans currently feel
more concerned about AI than excited, and that the joint most common concern they have is its
impact on jobs. Such worries could have a particular impact on employee mental health when
AI technologies start to be integrated into an organisation, which business leaders should
monitor.
The dominance of the U.S. in AI continued throughout 2023. Stanford’s AI Index Report
released in 2024 found 61 notable models had been released in the U.S. in 2023; this was
ahead of China’s 15 new models and France, the biggest contributor from Europe, with eight
models (Figure I). The U.K. and European Union as a region produced 25 notable models,
beating China for the first time since 2019, while Singapore, with three models, was the only
other producer of notable large language models in APAC.
Figure I
Methodology
The AI Index Report 2024 “tracks, collates, distills, and visualizes data related to artificial
intelligence”. It draws on a combination of data analyses, expert surveys, literature reviews and
qualitative assessments conducted by global researchers to provide insights into the state and
trajectory of AI research.
Fiona Jackson
Fiona is an experienced B2B reporter, content writer and editor from Bath, UK. At
TechRepublic, she focuses on business tech news in the UK and EMEA region.