Panaversity Cloud Native Applied Generative AI Engineer
Generative AI Engineer
Master the Future
Today's pivotal technological trends are Cloud Native (CN), Generative AI (GenAI),
and Physical AI. Cloud Native technology offers a scalable and dependable platform
for application operation, while AI equips these applications with intelligent, human-like capabilities. Physical AI aims to bridge the gap between digital intelligence and
physical capability, creating systems that can understand and interact with the world
in a human-like manner. Our aim is to train you to excel as a Cloud Native Applied
Generative and Physical AI developer globally.
The Cloud Native Applied Generative AI Certification program equips you to create
leading-edge Cloud Native AI and Physical AI solutions using a comprehensive
cloud-native, AI, and Physical AI platform.
This twenty-one-month program equips you with the skills to thrive in the age of Generative AI (GenAI), Physical AI, and cloud native computing (CN). You will become an expert Custom GPT, AI Agent, and Humanoid Robotics Developer. The program is divided into two levels: foundation and professional. Students will be able to start working after completing the foundation level and can continue their professional-level studies while working.
● Develop AI-Powered Microservices: Master Python; build APIs using FastAPI, SQLModel, Postgres, Kafka, and Kong; and leverage cutting-edge GenAI APIs such as OpenAI's alongside open-source LLMs (a minimal sketch follows this list).
● Cloud Native Expertise: Design and deploy cloud-native applications using
Docker, DevContainers, TestContainers, Kubernetes, Terraform, and GitHub
Actions.
● Distributed System Design: Design systems that run on multiple computers (or nodes) simultaneously, interacting and coordinating their actions by passing messages over a network.
● Designing AI Solutions using Design Thinking and Behaviour Driven
Development (BDD): We will learn to leverage these methodologies to create
AI solutions that are not only technically sound but also highly user-centric
and aligned with real-world needs.
● Fine-Tuning Open-Source Large Language Models using PyTorch and FastAI: We will learn to fine-tune open-source Large Language Models (LLMs) such as Meta LLaMA 3 using PyTorch and FastAI, with a focus on cloud-native training and deployment. We will set up development environments, preprocess data, fine-tune models, and deploy them using cloud-native platforms.
● Physical AI and Humanoid Robotics: We will learn to design, simulate, and
deploy advanced humanoid robots capable of natural interactions.
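As a taste of the first bullet above, here is a minimal, hypothetical FastAPI + SQLModel microservice; the model, route, and connection string are illustrative assumptions, not the program's official starter code:

```python
from typing import Optional

from fastapi import FastAPI
from sqlmodel import Field, Session, SQLModel, create_engine

# A toy table; in the course stack this would live in Postgres.
class Task(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    prompt: str
    result: str = ""

# Hypothetical connection string; point it at your own Postgres instance.
engine = create_engine("postgresql://user:pass@localhost:5432/appdb")

app = FastAPI()

@app.on_event("startup")
def create_tables() -> None:
    SQLModel.metadata.create_all(engine)

@app.post("/tasks")
def create_task(task: Task) -> Task:
    # Persist the request so a downstream GenAI worker could process it.
    with Session(engine) as session:
        session.add(task)
        session.commit()
        session.refresh(task)
        return task
```

Saved as main.py, this runs with `uvicorn main:app --reload`; the same pattern scales out once Kafka and Kong are added in front of it.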
Flexible Learning:
● Earn While You Learn: Start freelancing or contributing to projects after the
third quarter.
○ Certification:
■ Certified Professional Python Programmer (CPPP1)
With this course, you'll start by building a strong understanding of generative AI and learn how to apply large language models (LLMs) and diffusion models practically. We will introduce a set of principles known as prompt engineering, which helps developers work efficiently with AI. You will learn to create custom AI models and GPTs using OpenAI, Azure, and Google technologies, and to use open-source libraries such as LangChain, CrewAI, and LangGraph to automate repeatable, multi-step tasks and business processes that are typically done by a group of people; a short sketch follows.
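For example, a minimal LangChain sketch of a single prompt-engineered step might look like this; the model name and prompt text are illustrative assumptions:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# A prompt template encodes prompt-engineering principles as reusable code.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise business analyst."),
    ("user", "Summarise the following meeting notes in three bullets:\n{notes}"),
])

llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model your key can access

# Chaining prompt and model; multi-step pipelines extend this same pattern.
chain = prompt | llm
print(chain.invoke({"notes": "Q3 revenue up 12%; hiring freeze lifted."}).content)
```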
Build scalable AI-powered APIs using FastAPI, Postgres, Kafka, Kong, and GenAI APIs such as the OpenAI Chat Completions and Assistants APIs, LangChain, and open-source LLMs. Develop them using containers and Dev Containers, and deploy them with Docker Compose locally and with Kubernetes-powered serverless container services in the cloud. A minimal Chat Completions call is sketched below.
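A minimal Chat Completions call might look like the following sketch; the model name is an assumption, and the client reads OPENAI_API_KEY from the environment:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute any chat-capable model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Kafka's role in a microservices stack in two sentences."},
    ],
)
print(response.choices[0].message.content)
```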
○ Certifications:
■ PostgreSQL 13 Associate Certification
■ Confluent Certified Developer for Apache Kafka (CCDAK)
■ Design Thinking Professional Certificate (DTPC)
■ Test and Behavior Driven Development (TDD/BDD)
Learning Repo:
https://github.com/panaversity/learn-cloud-native-ai-powered-microservices/
Amazon is still the cloud king based on market share. But many analysts
agree: In the battle for the cloud, AI is now a game-changer — and Amazon's
main competitors, particularly Microsoft, have the momentum.
In our program we will be using Azure as our default provider for teaching and deployment. We will be using these services:
Azure Container Apps (we will start with this service, using Dapr and KEDA)
https://azure.microsoft.com/en-us/products/container-apps
Get started with the free tier: the first 180,000 vCPU-seconds, 360,000 GiB-seconds, and 2 million requests each month are free.
Watch: https://www.youtube.com/watch?v=0HwQfsa03K8
Deploy: https://learn.microsoft.com/en-us/azure/container-apps/code-to-cloud-options
GitHub
https://azure.microsoft.com/en-us/products/github/
Kafka
https://cloudatlas.me/5-different-ways-you-can-run-apache-kafka-on-azure-973a18925ac7
Professional Level (4 Quarters)
Fine-tuning Meta LLaMA 3 with PyTorch forms a significant part of the course.
Students will delve into the architecture of Meta LLaMA 3, learn how to load
pre-trained models, and apply fine-tuning techniques. The course covers
advanced topics such as regularisation and optimization strategies to
enhance model performance. Practical sessions guide students through the
entire fine-tuning process on custom datasets, emphasising best practices
and troubleshooting techniques.
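A heavily condensed sketch of that workflow, using the Hugging Face Transformers library on top of PyTorch, is shown below. The model name, dataset file, and hyperparameters are illustrative assumptions; LLaMA 3 weights are gated and require access approval, and real runs need GPUs plus memory-saving techniques such as LoRA:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Meta-Llama-3-8B"  # gated; request access on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical custom dataset: one training example per line of plain text.
dataset = load_dataset("text", data_files={"train": "custom_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama3-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=1,
                           learning_rate=2e-5),
    train_dataset=tokenized["train"],
    # mlm=False selects the causal-LM objective (labels are shifted inputs).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```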
A critical aspect of this course is its focus on cloud-native training and
deployment using Nvidia NIM. Furthermore, students learn how to deploy
models using Docker and Kubernetes, set up monitoring and maintenance
tools, and ensure their models are scalable and efficient.
The course culminates in a capstone project, where students apply all the
skills they have learned to fine-tune and deploy Meta LLaMA 3 on a chosen
platform. This project allows students to demonstrate their understanding and
proficiency in the entire process, from data preparation to cloud-native
deployment.
Learning Repo:
https://github.com/panaversity/learn-fine-tuning-llms
Master Kubernetes, Terraform, and GitHub Actions to deploy your AI APIs
and microservices in the cloud. We will cover distributed system design
involving creating systems that are distributed across multiple nodes, focusing
on scalability, fault tolerance, consistency, availability, and partition tolerance.
Certifications:
Frontend Specialisation
Learning Repo:
https://github.com/panaverse/learn-nextjs
In 2015, Klaus Schwab, founder of the World Economic Forum, asserted that we
were on the brink of a “Fourth Industrial Revolution,” one powered by a fusion of
technologies, such as advanced robotics, artificial intelligence, and the Internet of
Things.
“[This revolution] will fundamentally alter the way we live, work, and relate to one
another,” wrote Schwab in an essay published in Foreign Affairs. “In its scale, scope,
and complexity, the transformation will be unlike anything humankind has
experienced before.”
Investor Cathie Wood predicts that the market for humanoid robots could grow to $1
trillion by 2030.
enhances the agility of cloud-native applications, allowing them to operate more
efficiently with fewer resources.
Cloud native computing has already been adopted by the majority of companies: by 2024, more than 90% of global organisations will be running containerized applications in production. The adoption of Docker and Kubernetes has seen significant growth over recent years. As of 2022, about 61% of organisations reported using Kubernetes for container orchestration, and this number has been steadily increasing as more companies realise the benefits of these technologies for managing containerized applications.
This revolution is pivotal for technology and job landscapes, making it essential
knowledge in fast-evolving tech cycles. The rapid emergence of Gen AI-powered
and Physical AI technologies, and the evolving demand for skills necessitate
extensive and timely professional training.
● Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”); cloud native computing is the approach of building and running applications to take full advantage of this model.
According to some sources, the average salary for a Cloud Native Applied
Generative AI developer in the global market is around $150,000 per year.
However, this may vary depending on the experience level, industry, location,
and skills of the developer. For example, a senior Cloud Applied Generative
AI developer with more than five years of experience can earn up to $200,000
per year. A Cloud Applied Generative AI developer working in the financial
services industry can earn more than a developer working in the
entertainment industry. A Cloud Applied Generative AI developer working in
New York City can earn more than a developer working in Dubai. In general,
highly skilled AI developers, especially those specialising in applied
generative AI within cloud environments, tend to earn competitive salaries that
are often above the average for software developers or AI engineers due to
the specialised nature of their skills. Moreover, as generative AI technology
becomes more widely adopted and integrated into various products and
services, the demand for Cloud Applied Generative AI developers is likely to
increase.
demands business acumen, understanding market needs, networking,
securing funding, managing resources effectively, and navigating legal and
regulatory landscapes.
To sum up, the potential for Cloud Applied Generative AI Developers to start
their own companies is high.
4. Is the program not too long? Twenty-one months is a long time.
The program runs twenty-one months, broken into seven quarters of three months each. It covers a wide range of topics including Python, GenAI, Microservices, Databases, Cloud Development, Fine-tuning, DevOps, GPTs, AI Agents, and Humanoids. The program is designed to give students a comprehensive understanding of generative AI and prepare them for careers in this field. Nothing valuable can be achieved overnight; there are no shortcuts in life.
in libraries they will first come for Python. Python is always a better choice when dealing with AI and APIs.
Google Gemini Multi-Modal API is a new series of foundational models built and introduced by Google. It is built with a focus on multimodality from the ground up, which makes the Gemini models effective across different combinations of information types, including text, images, audio, and video. Currently, the API supports images and text. Gemini has reached state-of-the-art performance on several benchmarks, even beating ChatGPT and the GPT-4 Vision models in many of the tests.
There are three Gemini models: Gemini Ultra, Gemini Pro, and Gemini Nano, in decreasing order of size.
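A minimal text-only call through Google's Python SDK might look like this sketch; the model name and key handling are assumptions:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use a real key or an env var

model = genai.GenerativeModel("gemini-pro")  # text model; vision variants accept images

response = model.generate_content("Describe multimodality in one sentence.")
print(response.text)
```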
● Cloud providers offer a wide range of services that can be used to
support generative AI applications, including storage, computing,
networking, and machine learning.
● Cloud services are typically more cost-effective than on-premises
infrastructure, which can be a significant advantage for generative AI
applications that are often used for large-scale projects.
9. What is the purpose of Docker Containers and what are the benefits of deploying them with Docker Compose and Kubernetes?
● Docker Containers are a way to package software into a single unit that can be run on any machine, regardless of its operating system. You write a Dockerfile, a text file that describes how to build a Docker image; the image is then used to create a container, which is a running instance of the image. This makes containers ideal for deploying applications on a variety of platforms, including cloud-based services (a short Python sketch of the image-to-container relationship follows this answer).
● Docker Compose is a tool provided by Docker that allows you to
define and manage multi-container Docker applications locally. It
enables you to use a YAML file to configure the services, networks,
and volumes needed for your application's setup. With Docker
Compose, you can describe the services your application requires,
their configurations, dependencies, and how they should interact with
each other, all in a single file. This makes it easier to orchestrate complex applications composed of multiple interconnected containers locally.
● Kubernetes is a container orchestration system that automates the deployment, scaling, and management of containerized applications. It allows you to run multiple containers on a single machine or across multiple machines. It is open source and can be deployed in your own data centre or in the cloud.
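To make the image-versus-container distinction concrete, here is a small sketch using the Docker SDK for Python (the `docker` package); it assumes a local Docker daemon is running:

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# An image is the packaged, immutable unit built from a Dockerfile.
client.images.pull("python:3.12-slim")

# A container is a running instance of that image.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # clean up the container once it exits
)
print(output.decode())
```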
10. What is the purpose of learning to develop APIs in a Generative AI
program?
APIs (Application Programming Interfaces) are used to connect different
software applications and services together. They are the building blocks of
the internet and are essential for the exchange of data between different
systems.
By the end of the quarter, students will be able to use Python-based FastAPI
to develop APIs that are fast, scalable, and secure.
13. What are the benefits of using Docker Containers for development,
testing, and deployment?
Docker Containers are a fundamental building block for development, testing,
and deployment because they provide a consistent environment that can be
used across different systems. This eliminates the need to worry about
dependencies or compatibility issues, and it can help to improve the efficiency
of the development process. Additionally, Docker Containers can be used to
isolate applications, which can help to improve security and make it easier to
manage deployments.
14. What is the advantage of using open-source Docker, Kubernetes, and Terraform technologies instead of using AWS, Azure, or Google Cloud technologies?
Using open-source technologies like Docker, Kubernetes, and Terraform
offers several advantages over relying solely on proprietary cloud services
from AWS, Azure, or Google Cloud. Here’s a detailed comparison:
2. Cost Efficiency:
- Avoid Vendor Lock-In: Being locked into a single cloud provider can lead
to higher costs over time. Using open technologies allows you to leverage
competitive pricing from multiple providers or even use on-premises
resources.
- Optimised Resource Utilisation: Kubernetes helps in efficiently managing
resources through automated scaling and load balancing, potentially reducing
costs.
- Transparency: Access to the source code means you can audit and modify
the software to meet your security and compliance needs.
1. Managed Services:
- Ease of Use: Cloud providers offer a wide range of managed services that abstract away the complexity of setting up and managing infrastructure. This can save time and reduce operational overhead.
- Integrated Solutions: These platforms provide integrated services and
tools, such as databases, machine learning, analytics, and monitoring, which
can be easily combined to build complex applications.
Conclusion
- Cloud Providers: Provide ease of use, managed services, scalability,
security, and access to cutting-edge technology, which can be advantageous
for rapid development, scaling, and leveraging advanced services.
In many cases, a hybrid approach that combines the strengths of both open-source tools and cloud provider services can provide the best of both worlds, allowing you to optimise for cost, flexibility, and innovation.
15. Why in this program are we not learning to build LLMs ourselves? How difficult is it to develop an LLM like GPT-4 or Google's Gemini?
Developing an LLM like GPT-4 or Google Gemini is extremely difficult and requires a complex combination of resources, expertise, and infrastructure. Here's a breakdown of the key challenges:
Technical hurdles:
Additionally:
Rapidly evolving field: The LLM landscape is constantly evolving, with new
research, models, and benchmarks emerging. Staying abreast of these
advancements is essential.
To sum up, the focus of the program is not on LLM model development but on
applied Cloud GenAI Engineering (GenEng), application development, and
fine-tuning of foundational models. The program covers a wide range of topics
including Python, GenAI, Microservices, APIs, Databases, Cloud Development,
and DevOps, which will give students a comprehensive understanding of
generative AI and prepare them for careers in this field.
16. Business wise does it make more sense to develop LLMs ourselves
from scratch or use LLMs developed by others and build applications
using these tools by using APIs and/or fine-tuning them?
Whether it makes more business sense to develop LLMs from scratch or
leverage existing ones through APIs and fine-tuning depends on several
factors specific to your situation. Here's a breakdown of the pros and cons to
help you decide:
Developing your own LLM from scratch:
Pros:
Customization: You can tailor the LLM to your specific needs and data,
potentially achieving higher performance on relevant tasks.
Intellectual property: Owning the LLM allows you to claim intellectual property
rights and potentially monetize it through licensing or other means.
Control: You have full control over the training data, algorithms, and biases,
ensuring alignment with your ethical and business values.
Cons:
High cost: Building and training LLMs require significant technical expertise,
computational resources, and data, translating to high financial investment.
Time commitment: Developing an LLM is a time-consuming process,
potentially delaying your go-to-market with your application.
Technical expertise: You need a team of highly skilled AI specialists to
design, train, and maintain the LLM.
Using existing LLMs through APIs and fine-tuning:
Pros:
Cons:
Less customization: Existing LLMs are not specifically designed for your
needs, potentially leading to lower performance on some tasks.
Limited control: You rely on the data and biases of the existing LLM, which
might not align with your specific requirements.
Dependency on external parties: You are dependent on the availability and
maintenance of the LLM by its developers.
Ultimately, the best decision depends on your specific needs, resources, and
business goals. Carefully evaluating the pros and cons of each approach will
help you choose the strategy that best aligns with your success.
Here are some examples of what custom GPT models might be used for:
These custom models are created through a process of fine-tuning, where the
base GPT model is further trained (or 'fine-tuned') on a specific dataset. This
process allows the model to become more adept at understanding and
generating text that is relevant to the specific use case. Fine-tuning requires
expertise in machine learning and natural language processing, as well as
access to relevant training data.
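One concrete form of this workflow is OpenAI's hosted fine-tuning API; the sketch below is illustrative (the file path is hypothetical, and the JSONL file must contain chat-formatted training examples):

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted examples (hypothetical path).
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job on a base model that supports fine-tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```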
19. What are AI Agents and how do they differ from Custom GPTs?
AI Agents and Custom GPTs are both tools that utilise artificial intelligence to
perform tasks, but they have distinct functionalities and use cases. Here’s a
breakdown of their differences:
AI Agents
AI Agents are autonomous programs that can perceive their environment, make decisions, and act upon them to achieve specific goals. They often interact with other systems or users, continuously learning and adapting based on their experiences (a toy sketch follows the examples below).
Key Characteristics:
1. Autonomy: AI Agents operate independently without continuous human
intervention.
2. Learning: They often employ machine learning algorithms to improve
performance over time.
3. Interactivity: AI Agents can interact with their environment, other systems,
and users.
4. Goal-Oriented: They are designed to achieve specific objectives and can
adapt their actions to optimise towards these goals.
5. Multi-Modal Capabilities: AI Agents can incorporate various forms of AI,
such as computer vision, natural language processing, and decision-making
algorithms.
Examples:
- Robotics: Autonomous robots that navigate and perform tasks.
- Virtual Assistants: Programs like Siri or Alexa that interact with users and
perform tasks based on voice commands.
- Game AI: Non-player characters (NPCs) that adapt and react to player
actions.
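The sketch below is a toy, hypothetical agent illustrating the perceive-decide-act loop described above; the random number stands in for a real sensor and the print statement for a real actuator:

```python
import random

class ThermostatAgent:
    """A toy goal-oriented agent: keep a room at a target temperature."""

    def __init__(self, target: float) -> None:
        self.target = target

    def perceive(self) -> float:
        return 18.0 + random.random() * 8.0  # stand-in for a temperature sensor

    def decide(self, temperature: float) -> str:
        if temperature < self.target - 0.5:
            return "heat"
        if temperature > self.target + 0.5:
            return "cool"
        return "idle"

    def act(self, action: str) -> None:
        print(f"action: {action}")  # stand-in for driving a real actuator

agent = ThermostatAgent(target=21.0)
for _ in range(3):  # real agents run this loop continuously
    agent.act(agent.decide(agent.perceive()))
```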
Custom GPTs
Custom GPTs are tailored instances of OpenAI's ChatGPT, introduced in late 2023. They are designed for specific purposes and enhanced with context. Each custom GPT can have a unique “personality,” including tone of voice, language complexity, and responsiveness to specific topics. For example, a financial institution's custom GPT could be trained on financial reports and industry-specific terminology, while a healthcare provider's version might focus on medical literature and health policy documents.
Key Differences
1. Autonomy:
- AI Agents: Operate autonomously and continuously interact with their
environment.
- Custom GPTs: Typically respond to specific inputs and generate outputs
accordingly, but don’t operate autonomously beyond text generation tasks.
4. Interactivity:
- AI Agents: Can interact with both digital and physical environments.
- Custom GPTs: Primarily interact through text-based inputs and outputs.
In summary, while both AI Agents and Custom GPTs utilise AI, AI Agents are
designed for autonomous, goal-oriented actions in diverse environments, and
Custom GPTs are specialised in generating and understanding human-like
text for specific applications.
20. Do we need to use Design Thinking and BDD for designing custom
GPTs and AI Agents?
Design Thinking and Behavior-Driven Development (BDD) are methodologies
that can greatly enhance the process of designing custom GPTs and AI
Agents, though they are not strictly necessary. Here’s how each can be
beneficial:
Design Thinking
Design Thinking is a user-centred approach to innovation and problem-solving
that involves understanding the user, challenging assumptions, redefining
problems, and creating innovative solutions through iterative prototyping and
testing.
Benefits for Custom GPTs and AI Agents:
1. User-Centric Focus: Ensures that the AI solutions are tailored to the actual
needs and pain points of users.
2. Empathy: Helps in understanding the context and environment in which the
AI will be used, leading to more relevant and effective solutions.
3. Iterative Development: Encourages continuous testing and refinement of
ideas, leading to more robust and user-friendly AI models.
4. Collaboration: Promotes cross-disciplinary collaboration, which can bring
diverse perspectives and expertise to the design process.
For Custom GPTs:
- BDD:
- Define the expected behaviours of the GPT in natural language scenarios.
- Create automated tests that validate the GPT’s responses against these
scenarios.
- Ensure that the GPT’s behaviour aligns with user stories and business
requirements.
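For instance, one way to express such a scenario as an automated check is a plain pytest test written in Given/When/Then style; `ask_support_gpt` is a hypothetical wrapper around a deployed custom GPT, stubbed here so the test runs:

```python
def ask_support_gpt(question: str) -> str:
    """Hypothetical wrapper around a deployed custom GPT (stubbed for illustration)."""
    return "Please share your order number so we can look into the refund."

def test_refund_question_requests_order_number() -> None:
    # Given a customer-support GPT
    # When a user asks about a refund
    answer = ask_support_gpt("How do I get a refund?")
    # Then the reply should ask for the order number
    assert "order number" in answer.lower()
```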
For AI Agents:
- Design Thinking:
- Map out the user journey and identify critical interaction points where the AI
Agent will provide value.
- Prototype and test the agent’s interactions in various environments to
ensure robustness and usability.
- Use empathy maps and personas to better understand and anticipate user
needs and behaviours.
- BDD:
- Write behaviour scenarios that describe how the AI Agent should react in
different situations.
- Develop tests that simulate these scenarios to verify the agent's decision-making and learning processes.
- Continuously refine the agent’s behaviour based on test results and user
feedback.
While not strictly necessary, Design Thinking and BDD can significantly
enhance the design and development process of custom GPTs and AI Agents
by ensuring a user-centred approach, clear requirements, and continuous
improvement through iterative testing and feedback. These methodologies
help in creating more effective, reliable, and user-friendly AI solutions.
PyTorch
PyTorch is an open-source deep learning framework developed by
Facebook’s AI Research lab. It provides a flexible and efficient platform for
building and training neural networks.
1. Dynamic Computation Graphs: PyTorch builds the computation graph on the fly as code executes (define-by-run), which makes model development and debugging more intuitive.
2. Extensive Libraries and Tools: PyTorch has a wide range of libraries and
tools that facilitate various deep learning tasks, including natural language
processing (NLP). Libraries like Hugging Face’s Transformers are built on top
of PyTorch, providing pre-trained models and utilities for fine-tuning.
3. GPU Acceleration: PyTorch supports GPU acceleration, which is essential
for handling the large computational requirements of fine-tuning LLMs.
4. Community and Ecosystem: PyTorch has a strong community and
extensive documentation, making it easier to find resources, tutorials, and
support for fine-tuning tasks.
Role in Fine-Tuning:
- Model Customization: Allows users to modify the architecture of LLMs to
better fit specific tasks.
- Efficient Training: Provides efficient backpropagation and optimization
routines to train large models on specialised datasets.
- Experimentation: Facilitates rapid experimentation with different
hyperparameters and training setups due to its flexible framework.
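The sketch below shows the basic PyTorch training step that underlies all of this; the tiny linear model and random batch are stand-ins for a real LLM and dataset:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for a far larger network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 10)          # a dummy input batch
y = torch.randint(0, 2, (8,))   # dummy class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)  # forward pass builds the graph on the fly
loss.backward()              # backpropagation through that graph
optimizer.step()             # one optimisation step
print(loss.item())
```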
FastAI
FastAI is a deep learning library built on top of PyTorch that aims to simplify
training neural networks by providing high-level abstractions and best
practices.
Role in Fine-Tuning:
- High-Level API: Simplifies the process of loading pre-trained models,
defining custom datasets, and setting up training loops.
- Training Utilities: Provides utilities for monitoring training progress, adjusting
learning rates, and saving model checkpoints.
- Best Practices: Encourages the use of best practices in model training, such
as using discriminative learning rates and employing effective data
augmentation techniques.
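For a sense of how little code FastAI's high-level API needs, here is a sketch that fine-tunes a text classifier on the standard IMDB reviews dataset (it downloads the data on first run; a fine-tuned LLM workflow follows the same pattern at larger scale):

```python
from fastai.text.all import (AWD_LSTM, TextDataLoaders, URLs, accuracy,
                             text_classifier_learner, untar_data)

path = untar_data(URLs.IMDB)  # fetches and unpacks the IMDB reviews dataset
dls = TextDataLoaders.from_folder(path, valid="test")

learn = text_classifier_learner(dls, AWD_LSTM, metrics=accuracy)
learn.fine_tune(1)  # freeze the body, train the head, then unfreeze and train all
```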
Both PyTorch and FastAI play crucial roles in the fine-tuning of open-source
LLMs. PyTorch provides the foundational tools and flexibility needed for deep
learning, while FastAI builds on top of PyTorch to offer high-level abstractions
and best practices that streamline the fine-tuning process. Together, they
enable efficient, effective, and user-friendly fine-tuning of large language
models for specific tasks and domains.
22. In this course both PyTorch and FastAI play crucial roles in the fine-tuning of open-source LLMs; why don't we use TensorFlow instead?
While TensorFlow is a powerful and widely-used deep learning framework,
PyTorch and FastAI offer several advantages that make them particularly
well-suited for fine-tuning open-source Large Language Models (LLMs).
Here’s a detailed comparison highlighting why PyTorch and FastAI might be
preferred over TensorFlow for this specific task:
1. Dynamic Computation Graphs:
- PyTorch: Builds the computation graph on the fly as code executes, which suits research and rapid iteration.
- TensorFlow: While TensorFlow 2.x also supports dynamic computation graphs, PyTorch's implementation is often considered more intuitive and easier to work with for dynamic tasks.
2. Ease of Use:
- PyTorch: Known for its simplicity and clear, Pythonic code, which makes it
easier to learn and use, especially for research and prototyping.
- TensorFlow: While TensorFlow 2.0 improved usability, it is still considered
more complex compared to PyTorch, particularly for newcomers.
1. High-Level Abstractions:
- FastAI: Provides high-level abstractions that simplify complex tasks in
model training and fine-tuning, making it accessible and efficient for users.
These abstractions are built on top of PyTorch, leveraging its flexibility.
- TensorFlow: While Keras (part of TensorFlow) offers high-level APIs,
FastAI’s APIs are often considered more intuitive and tailored towards rapid
experimentation and prototyping.
2. Best Practices:
- FastAI: Incorporates state-of-the-art techniques and best practices in deep
learning, such as learning rate schedules, transfer learning, and data
augmentation, making it easier to achieve high performance with minimal
effort.
- TensorFlow: Requires more manual effort to implement many of these
best practices, which can be a barrier for quick iteration and experimentation.
3. Community and Learning Resources:
- FastAI: Has an active community and a wealth of educational resources, including courses and documentation that focus on practical, hands-on learning.
- TensorFlow: Also has extensive documentation and resources, but the
learning curve can be steeper, especially for those new to deep learning.
23. What is Physical AI and how does it relate to humanoid robotics?
Key Characteristics:
1. Real-World Interaction:
- Physical AI systems can perceive their environment through sensors,
process this information, and take appropriate actions using actuators.
2. Embodiment:
- Unlike purely digital AI, Physical AI involves AI embedded in physical
bodies, like humanoid robots, which can navigate and manipulate the physical
world.
3. Understanding Physics:
- These AI systems are designed to comprehend and adhere to the physical
laws that govern real-world interactions, such as gravity, friction, and object
dynamics.
4. Human-like Functionality:
- Humanoid robots are a prime example of Physical AI, as they are built to
perform tasks in environments designed for humans, utilising a form factor
that mirrors human anatomy.
5. Data-Driven Training:
- Physical AI leverages vast amounts of real-world data to train AI models,
enabling robots to improve their performance through machine learning and
interaction experiences.
Applications:
- Healthcare:
- Assistive robots that help with patient care, rehabilitation, and surgery.
- Service Industry:
- Robots that perform tasks such as cleaning, delivery, and customer
service.
- Manufacturing:
- Industrial robots that assemble products, manage inventory, and ensure
quality control.
- Exploration:
- Robots designed for exploration in environments like space, underwater, or
disaster zones.
24. What are the different specialisations offered at the end of the program
and what are their benefits?
At the end of the GenEng certification program we offer six specialisations in
different fields:
Healthcare and Medical GenAI: This specialisation will teach students how
to use generative AI to improve healthcare and medical research. This is
relevant to fields such as drug discovery, personalised medicine, and surgery
planning.
Benefits:
● Learn how to use generative AI to identify diseases, develop new
drugs, and personalise treatment plans.
● Gain a deeper understanding of the ethical implications of using
generative AI in healthcare.
● Prepare for a career in a growing field with high demand for skilled
professionals.
GenAI for Accounting, Finance, and Banking: This specialisation will teach
students how to use generative AI to improve accounting, finance, and
banking processes. This is relevant to fields such as fraud detection, risk
management, and investment analysis.
Benefits:
● Learn how to use generative AI to automate tasks, identify patterns,
and make predictions.
● Gain a deeper understanding of the financial industry and how
generative AI can be used to improve its processes.
● Prepare for a career in a growing field with high demand for skilled
professionals.
GenAI for Engineers: This specialisation will teach students how to use
generative AI to improve engineering design and problem-solving. This is
relevant to fields such as manufacturing, construction, and product
development.
Benefits:
● Learn how to use generative AI to create simulations, optimize designs,
and predict failures.
● Gain a deeper understanding of the engineering design process and
how generative AI can be used to improve it.
● Prepare for a career in a growing field with high demand for skilled
professionals.
GenAI for Sales and Marketing: This specialisation will teach students how
to use generative AI to improve sales and marketing campaigns. This is
relevant to fields such as advertising, public relations, and customer service.
Benefits:
● Learn how to use generative AI to create personalised marketing
messages, generate leads, and track campaign performance.
● Gain a deeper understanding of the latest marketing trends and how
generative AI can be used to improve them.
● Prepare for a career in a growing field with high demand for skilled
professionals.
GenAI for Cyber Security:
● Strengthen threat detection and response: GenAI can be used to
rapidly detect and respond to cyber threats by analysing large volumes
of security data in real time, identifying anomalies, and suggesting
appropriate countermeasures.
● Enhance security monitoring and analysis: GenAI can assist
security analysts in monitoring and analysing security logs, automating
threat detection, and providing insights into security risks and
vulnerabilities.
● Improve threat intelligence: GenAI can be used to gather and
analyse threat intelligence from various sources, enabling
organisations to stay informed about the latest threats and trends and
proactively strengthen their security posture.