Generative AI LLM Tutorial

Comparison of OpenAI GPT-3.5 and GPT-4


OpenAI's GPT-3.5 and GPT-4 are both powerful large language models (LLMs) capable
of generating human-quality text, translating languages, writing many kinds of
creative content, and answering questions in an informative way. However, key
differences between the two models make them suitable for different use cases.

GPT-3.5

GPT-3.5 is the predecessor of GPT-4, and it is smaller and less capable than its
successor. It is still a very powerful LLM, however, and a good choice for tasks that
require moderate levels of processing power, such as:

 Generating text
 Translating languages
 Writing different kinds of creative content
 Answering your questions in an informative way

GPT-4

GPT-4 is the latest version of GPT, and it is significantly larger and more capable than
GPT-3.5. It is also multimodal: it can accept both text and images as input, although it
generates text (including code) rather than images. GPT-4 is a good choice for tasks
that require high levels of processing power, such as:

 Generating complex and creative text formats


 Translating languages with high accuracy
 Writing different kinds of creative content
 Answering your questions in a comprehensive and informative way
 Analyzing images (multimodal input)
 Writing and debugging code

When to choose GPT-3.5


You should choose GPT-3.5 if you are looking for a powerful and affordable LLM that is
suitable for a wide range of tasks. GPT-3.5 is a good choice for developers who are
building applications that require natural language processing, as well as for individuals
who are looking for a tool to help them with their writing and creative work.

When to choose GPT-4

You should choose GPT-4 if you are looking for the most powerful and versatile LLM
available. GPT-4 is a good choice for developers who are building cutting-edge AI
applications, as well as for individuals who are looking for a tool to help them with their
most challenging writing and creative tasks.

Here is a table summarizing the key differences between GPT-3.5 and GPT-4:

Feature | GPT-3.5 | GPT-4
Size | 175B parameters | Not officially disclosed (significantly larger than GPT-3.5)
Capabilities | Text, translation, creative writing, question answering | Text, translation, creative writing, question answering, image analysis, code generation
Benefits | Affordable, versatile | More powerful, versatile
Use cases | Developers, individuals | Developers, individuals

In conclusion, both GPT-3.5 and GPT-4 are powerful LLMs that can be used for a wide
range of tasks. The best choice for you will depend on your specific needs and
requirements.
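As a rough rule of thumb, the guidance above can be sketched as a small Python selection helper; the model identifiers `gpt-3.5-turbo` and `gpt-4` are assumed API names, not taken from this document:

```python
def choose_model(needs_image_input: bool = False, high_complexity: bool = False) -> str:
    """Illustrative heuristic: prefer GPT-4 for multimodal or demanding tasks,
    GPT-3.5 for affordable general-purpose work."""
    if needs_image_input or high_complexity:
        return "gpt-4"
    return "gpt-3.5-turbo"

print(choose_model(high_complexity=True))  # gpt-4
print(choose_model())                      # gpt-3.5-turbo
```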
Comparison of different LLMs such as OpenAI GPT-4,
Anthropic Claude, Google Bard, and Amazon Bedrock Titan,
along with scenarios for when to use which LLM

Here is a comparison of OpenAI GPT-4, Anthropic Claude, Google Bard, and
Amazon Bedrock Titan, along with scenarios for when to use each:

LLM | Overview | Strengths | Weaknesses | Best For
OpenAI GPT-4 | A large language model trained on a massive dataset of text and code. | Generates human-quality text, translates languages, writes many kinds of creative content, and answers questions informatively. | Can be biased and inaccurate. | Generating creative text formats, translating languages, writing creative content, answering questions informatively.
Anthropic Claude | A large language model trained on a massive dataset of text and code, with an emphasis on safe, helpful output. | Generates complex and creative text formats, translates languages with high accuracy, and writes many kinds of creative content effectively. | Can be expensive and time-consuming to use. | Cutting-edge AI applications, challenging writing and creative tasks.
Google Bard | A large language model with roughly 137 billion parameters, trained on a massive dataset of text and code. | More conversational and contextual; handles complex queries and generates factual answers; integrates with other Google services such as Search, Maps, Translate, and Photos. | Can be biased and inaccurate. | Handling complex queries, generating factual answers, integrating with other Google services.
Amazon Bedrock Titan | A cloud-based service for building and scaling generative AI applications using foundation models (including Amazon's Titan models) from various providers. | Democratizes access to generative AI and enables customers to customize and deploy models for their specific needs across industries. | Relies on third-party foundation models, which may not be as powerful or versatile as Anthropic or Google Bard. | Businesses that want to build and scale generative AI applications without investing in the infrastructure and expertise to develop their own models.

Here are some additional scenarios when to use each LLM:

 OpenAI GPT-4:
o When you need a powerful and affordable LLM for a wide range of tasks.
o When you are developing applications that require natural language processing.
o When you are looking for a tool to help you with your writing and creative work.
 Anthropic:
o When you are looking for the most powerful and versatile LLM available.
o When you are developing cutting-edge AI applications.
o When you are looking for a tool to help you with your most challenging writing
and creative tasks.
 Google Bard:
o When you need an LLM that is conversational and contextual.
o When you need an LLM that can handle complex queries and generate factual
answers.
o When you need an LLM that integrates with other Google services.
 Amazon Bedrock Titan:
o When you want to build and scale generative AI applications without having to
invest in the infrastructure and expertise to develop your own models.
o When you need a cloud-based solution that can be easily customized and
deployed.



Comparison of APIs used in OpenAI GPT-3.5 Turbo and
GPT-4, along with an explanation of when to use which API,
with some examples
Here is a comparison of the APIs used in OpenAI GPT-3.5 and GPT-4, along
with an explanation of when to use which API, with examples:

GPT-3.5 APIs

 Completion API: This is the most versatile API for GPT-3.5, and it can
be used for a wide range of tasks, including generating text, translating
languages, writing different kinds of creative content, and answering
questions in an informative way.

Example: You can use the Completion API to generate a poem about a
sunset.

Python
import openai

openai.api_key = "YOUR_API_KEY"

# Legacy (pre-1.0) openai library; the Completions endpoint takes an engine and a prompt.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a poem about a sunset.",
    max_tokens=100,
)

print(response.choices[0].text)

 Chat API: This API is specifically designed for chat conversations, and it
can be used to build chatbots and other conversational applications.

Example: You can use the Chat API to build a chatbot that answers questions
about your company's products.

Python
import openai

openai.api_key = "YOUR_API_KEY"

# The chat endpoint in the pre-1.0 library is openai.ChatCompletion.create;
# the messages list carries the conversation history.
messages = [
    {"role": "system", "content": "You answer questions about the company's products."},
    {"role": "user", "content": "What is your company's best-selling product?"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)

GPT-4 APIs
 Chat Completions API: GPT-4 is accessed through the chat completions
endpoint rather than the legacy Completion API; it is more powerful and can
generate more complex and creative text formats.

Example: You can use the Completion API to generate a script for a movie
scene.

Python
import openai

openai.api_key = "YOUR_API_KEY"

# GPT-4 is served through the chat completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Write a script for a movie scene where two characters are arguing."}],
)

print(response.choices[0].message.content)

 Image API: OpenAI's image endpoint (backed by DALL·E rather than GPT-4
itself) generates images from text descriptions.

Example: You can use the Image API to generate an image of a cat wearing a
hat.

Python
import openai

openai.api_key = "YOUR_API_KEY"

# Image generation (DALL·E) takes a prompt, a count, and a size; there is no engine parameter.
response = openai.Image.create(
    prompt="A cat wearing a hat",
    n=1,
    size="512x512",
)

print(response["data"][0]["url"])

 Code generation: There is no separate Code API; code is generated through
the completion/chat endpoints using a code-capable model such as GPT-4.

Example: You can ask for a Python function that takes a list of numbers and
returns the average.

Python
import openai

openai.api_key = "YOUR_API_KEY"

# Code generation uses the chat endpoint with a code-capable model.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Write a Python function that takes a list of numbers "
                          "and returns the average."}],
)

print(response.choices[0].message.content)

When to use which API

 Use the Completion API with GPT-3.5 for general-purpose text generation tasks.
 Use the Chat API with GPT-3.5 for chat conversations.
 Use the Chat Completions API with GPT-4 for more complex and creative text generation tasks.
 Use the Image API (DALL·E) to generate images from text descriptions.
 Use the completion/chat endpoints with a code-capable model such as GPT-4 to generate code from text descriptions.

Comparison of APIs used in OpenAI GPT-3.5 Turbo and
GPT-4, along with an explanation of when to use which API,
with some C# examples covering Chat, Completion, Files,
and Embeddings
Here is a comparison of the APIs used in OpenAI GPT-3.5 and GPT-4, along
with an explanation of when to use which API, with some C# examples
covering Chat, Completion, Files, and Embeddings. Note that OpenAI does not
publish an official C# SDK; the examples below use a hypothetical client
wrapper for illustration, so adapt the calls to the community library you choose.

GPT-3.5 APIs

Chat API

The Chat API is specifically designed for chat conversations, and it can be
used to build chatbots and other conversational applications. It is a good
choice for tasks that require a natural and engaging conversation flow, such
as customer service chatbots or virtual assistants.

C# Example:

C#
using System;
using System.Linq;
using OpenAI.API.Models;

// Hypothetical C# client wrapper (no official OpenAI C# SDK exists).
OpenAI.API.Client client = new OpenAI.API.Client("YOUR_API_KEY");

Conversation conversation = client.CreateChatConversation(new ChatConversationCreateRequest()
{
    Engine = "davinci",
    Prompt = "Hello, how can I help you today?"
});

while (true)
{
    ConversationResponse response = client.ContinueChatConversation(
        conversation.Id,
        new ChatConversationContinueRequest()
        {
            Prompt = conversation.ConversationLog.Last().Text
        });

    Console.WriteLine(response.Choices[0].Text);
}

Completion API

The Completion API is a versatile API that can be used for a wide range of
tasks, including generating text, translating languages, writing different kinds
of creative content, and answering questions in an informative way. It is a
good choice for tasks that require generating text from a given prompt or
context, such as writing product descriptions or generating creative writing
prompts.

C# Example:

C#
using System;
using OpenAI.API.Models;

// Hypothetical C# client wrapper.
OpenAI.API.Client client = new OpenAI.API.Client("YOUR_API_KEY");

CompletionResponse response = client.CreateCompletion(new CompletionRequest()
{
    Engine = "davinci",
    Prompt = "Write a poem about a sunset."
});

Console.WriteLine(response.Choices[0].Text);

Files API

The Files API is used to upload and manage files that can be used with other
OpenAI APIs, such as the Completion API and the Embeddings API. It is a
good choice for tasks that require processing text files, such as generating
summaries of documents or translating documents into different languages.

C# Example:

C#
using System;
using System.Collections.Generic;
using System.IO;
using OpenAI.API.Models;

// Hypothetical C# client wrapper.
OpenAI.API.Client client = new OpenAI.API.Client("YOUR_API_KEY");

FileResponse fileResponse = client.CreateFile(new FileCreateRequest()
{
    File = File.OpenRead("input.txt")
});

CompletionResponse response = client.CreateCompletion(new CompletionRequest()
{
    Engine = "davinci",
    Prompt = "Summarize this document.",
    Files = new List<File>() { fileResponse.File }
});

Console.WriteLine(response.Choices[0].Text);

Embeddings API

The Embeddings API generates vector representations of text, which can be
used for tasks such as natural language understanding (NLU) and natural
language processing (NLP). It is a good choice for tasks that require
understanding the semantic meaning of text, such as classifying text into
categories or extracting key information from text.

C# Example:

C#
using System;
using OpenAI.API.Models;

// Hypothetical C# client wrapper.
OpenAI.API.Client client = new OpenAI.API.Client("YOUR_API_KEY");

EmbeddingsResponse response = client.CreateEmbeddings(new EmbeddingsRequest()
{
    Engine = "davinci",
    Input = "Hello, how can I help you today?"
});

foreach (Embedding embedding in response.Embeddings)
{
    Console.WriteLine(embedding.Text);
    Console.WriteLine(embedding.Embedding);
}
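Once embedding vectors are retrieved, they are typically compared with cosine similarity to judge semantic closeness. A minimal, SDK-independent Python sketch of that comparison:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```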

GPT-4 APIs

GPT-4 APIs are similar to the GPT-3.5 APIs, but they are more powerful and
can generate more complex and creative text formats. Image generation is
available through the separate Image (DALL·E) endpoint, and code can be
generated from text descriptions.

C# Examples for GPT-4 APIs

GPT-4-specific C# bindings are not yet available, but the structure of the
requests and responses would be similar to the GPT-3.5 examples above.

C#
using System;
using OpenAI.API.Models;

// Image API (hypothetical wrapper; image generation is served by DALL·E rather than GPT-4 itself).
OpenAI.API.Client client = new OpenAI.API.Client("YOUR_API_KEY");

ImageResponse response = client.CreateImage(new ImageRequest()
{
    Prompt = "Generate an image of a cat wearing a hat."
});

Console.WriteLine(response.Url);

Details about Amazon Bedrock in comparison with OpenAI, listing
advantages and disadvantages, and when to use which one if an
organization wants to start its generative AI journey

Amazon Bedrock
Amazon Bedrock is a cloud-based service that enables organizations to build and scale
generative AI applications using foundation models from various providers. It provides a
unified platform for accessing and managing these models, simplifying the development
and deployment of generative AI applications.

Advantages:

 Democratizes Access to Generative AI: Bedrock lowers the barrier to entry for
generative AI, making it accessible to organizations of all sizes and technical
expertise.
 Centralized Model Management: Bedrock centralizes the management of
foundation models, simplifying the process of selecting, deploying, and scaling
models.
 Cloud-Based Infrastructure: Bedrock leverages Amazon's cloud infrastructure,
ensuring scalability and reliability.
 Integration with AWS Services: Bedrock integrates seamlessly with other AWS
services, providing a unified environment for building and deploying generative AI
applications.

Disadvantages:

 Reliance on Third-Party Models: Bedrock relies on foundation models from other
providers, limiting the flexibility in model selection and customization.
 Limited Control over Model Training: Organizations lack direct control over the
training and optimization of foundation models.
 Potential Vendor Lock-in: Over-reliance on Bedrock may lead to vendor lock-in.

When to Use Amazon Bedrock:

 Organizations seeking a turnkey solution for building and deploying generative AI
applications.
 Organizations with limited technical expertise in generative AI.
 Organizations that prefer a cloud-based solution for managing and deploying
generative AI models.

OpenAI

OpenAI is a research and development company focused on advancing artificial general
intelligence (AGI). It provides access to some of the most powerful generative AI
models, such as GPT-3 and GPT-4, through its API and platform.

Advantages:
 Access to Cutting-Edge Models: OpenAI offers access to the most advanced and
powerful generative AI models available.
 Flexibility and Customization: Organizations have more control over model
selection, customization, and training.
 Direct Access to OpenAI's Research: OpenAI provides access to its latest
research and development in generative AI.

Disadvantages:

 Technical Expertise Required: OpenAI's APIs and platform require a higher level
of technical expertise.
 Higher Costs: OpenAI's solutions may be more expensive than Amazon Bedrock.
 Potential for Bias and Misuse: OpenAI's models require careful handling to avoid
bias and misuse.

When to Use OpenAI:

 Organizations with advanced machine learning expertise.
 Organizations that require the most powerful and customizable generative AI
models.
 Organizations seeking to collaborate with OpenAI on research and development
in generative AI.

Conclusion

The choice between Amazon Bedrock and OpenAI depends on the specific needs and
capabilities of each organization. Organizations with limited technical expertise may find
Amazon Bedrock's turnkey solution more appealing, while organizations with advanced
machine learning expertise may prefer OpenAI's flexibility and access to cutting-edge
models. Ultimately, the best approach is to evaluate both platforms and select the one
that best aligns with the organization's goals and resources.
APIs used in LLMs like Titan, Anthropic models, Google
Bard, and Azure OpenAI, with a comparison of consumption
cost per 1K tokens, and C# examples of when to use which
API for Titan, Anthropic, Azure OpenAI, and Google Bard
Here is a summary of the APIs used in LLMs like Titan, Anthropic models, Google Bard,
and Azure OpenAI, along with a comparison of consumption costs per 1,000 tokens and
example API calls in C# for different use cases. Note that these providers do not all ship
official C# SDKs with the exact APIs shown; except where noted, the client classes
below are hypothetical wrappers used for illustration.

Titan

Titan is a collection of foundation models developed by Amazon. It offers a variety of
APIs for different types of natural language processing tasks.

 Completion API: The Completion API is a versatile API that can be used for a
wide range of tasks, including generating text, translating languages, writing
different kinds of creative content, and answering questions in an informative
way.
 Translation API: The Translation API is specifically designed for translating text
from one language to another.

 Code API: The Code API can be used to generate code from text descriptions.

Anthropic

Anthropic is a research and development company focused on developing safe and
beneficial artificial intelligence. It offers access to its large language models, including
Claude, through its platform.

 Completion API: The Completion API is similar to the Completion API in Titan,
but it can generate more complex and creative text formats.

 Code API: The Code API is similar to the Code API in Titan, but it can generate
more complex and functional code.

Google Bard

Google Bard is a large language model developed by Google AI. It is known for its
ability to generate human-quality text, translate languages, write different kinds of
creative content, and answer questions in an informative way.

 Completion API: The Completion API is similar to the Completion API in Titan,
but it is specifically designed for use with Google Bard.

 Chat API: The Chat API is specifically designed for chat conversations.

Azure OpenAI

Azure OpenAI is a service that provides access to OpenAI's large language models,
such as GPT-3 and GPT-4, through its REST API.

 Completion API: The Completion API is similar to the Completion API in Titan,
but it is specifically designed for use with OpenAI's models.

Consumption Cost Comparison

The consumption cost for each LLM varies by model and type of API call, and
published prices change frequently; the figures below are illustrative only, so consult
each provider's pricing page for current rates. As a rough guideline, here is a
comparison of the approximate cost per 1,000 tokens:

 Titan: $0.005

 Anthropic: $0.010

 Google Bard: $0.006

 Azure OpenAI: $0.008
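Using the illustrative per-1K-token figures above (not current vendor pricing), a quick Python cost estimator:

```python
# Illustrative per-1K-token prices as quoted in this tutorial; check vendor pricing pages.
PRICE_PER_1K = {
    "titan": 0.005,
    "anthropic": 0.010,
    "bard": 0.006,
    "azure_openai": 0.008,
}

def estimate_cost(provider: str, tokens: int) -> float:
    """Approximate USD cost for a given token count at the quoted rate."""
    return PRICE_PER_1K[provider] * tokens / 1000

print(estimate_cost("azure_openai", 250_000))  # 2.0
```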

C# Examples

Here are some example API calls in C# for different use cases:

Titan

C#
using System;
using Amazon.AI.Models;

// Hypothetical C# client wrapper.
Amazon.AI.Client client = new Amazon.AI.Client("YOUR_API_KEY");

// Generate text
CompletionResponse completionResponse = client.CreateCompletion(new
CompletionRequest()
{
Engine = "davinci",
Prompt = "Write a poem about a sunset."
});

Console.WriteLine(completionResponse.Choices[0].Text);

// Translate text
TranslationResponse translationResponse = client.CreateTranslation(new
TranslationRequest()
{
Engine = "davinci",
SourceLanguageCode = "en",
TargetLanguageCode = "fr",
TextToTranslate = "Hello, how are you?"
});

Console.WriteLine(translationResponse.TranslatedText);

// Generate code
CodeResponse codeResponse = client.CreateCode(new CodeRequest()
{
    Engine = "davinci",
    Prompt = "Generate a Python function that takes a list of numbers and returns the average."
});

Console.WriteLine(codeResponse.Choices[0].Text);

Anthropic

C#
using System;
using Anthropic.Models;

// Hypothetical C# client wrapper.
Anthropic.Client client = new Anthropic.Client("YOUR_API_KEY");

// Generate complex and creative text formats
CompletionResponse completionResponse = client.CreateCompletion(new CompletionRequest()
{
    Engine = "claude",
    Prompt = "Write a script for a movie scene where two characters are arguing."
});

Console.WriteLine(completionResponse.Choices[0].Text);

// Generate more complex and functional code
CodeResponse codeResponse = client.CreateCode(new CodeRequest()
{
    Engine = "claude",
    Prompt = "Generate a Python function that takes a list of strings and returns a list of their lengths."
});

Console.WriteLine(codeResponse.Choices[0].Text);

Google Bard

C#
using System;
using Google.AI.Models;

// Hypothetical C# client wrapper.
Google.AI.Client client = new Google.AI.Client("YOUR_API_KEY");

// Generate human-quality text
CompletionResponse completionResponse = client.CreateCompletion(new CompletionRequest()
{
    Engine = "bard",
    Prompt = "Write a haiku about a cat."
});

Console.WriteLine(completionResponse.Choices[0].Text);

// Translate languages with high accuracy
TranslationResponse translationResponse = client.CreateTranslation(new TranslationRequest()
{
    Engine = "bard",
    SourceLanguageCode = "de",
    TargetLanguageCode = "en",
    TextToTranslate = "Guten Tag, wie geht es Ihnen?"
});

Console.WriteLine(translationResponse.TranslatedText);

// Write different kinds of creative content (reuse the variable; redeclaring it would not compile)
completionResponse = client.CreateCompletion(new CompletionRequest()
{
    Engine = "bard",
    Prompt = "Write a song about a lost love."
});

Console.WriteLine(completionResponse.Choices[0].Text);

// Answer questions in a comprehensive and informative way
QuestionAnsweringResponse questionAnsweringResponse = client.CreateQuestionAnswering(new QuestionAnsweringRequest()
{
    Engine = "bard",
    Question = "What is the capital of France?",
    Context = "France is a country in Western Europe."
});

Console.WriteLine(questionAnsweringResponse.Answers[0].Text);

Azure OpenAI

C#
using System;
using Azure;
using Azure.AI.OpenAI;

// Azure OpenAI has a real .NET SDK (Azure.AI.OpenAI); the endpoint and
// deployment name come from your Azure OpenAI resource (placeholders here).
OpenAIClient client = new OpenAIClient(
    new Uri("https://YOUR_RESOURCE.openai.azure.com/"),
    new AzureKeyCredential("YOUR_API_KEY"));

Response<Completions> response = client.GetCompletions(
    "YOUR_DEPLOYMENT_NAME",
    new CompletionsOptions { Prompts = { "Write a poem about a sunrise." } });

Console.WriteLine(response.Value.Choices[0].Text);

Choosing the Right API

The best API for a particular task depends on the specific requirements of the task.
Here is a table that summarizes when to use each API:

Task | API
Generate text | Completion API
Translate languages | Translation API (Titan, Anthropic, Azure OpenAI)
Write creative content | Completion API (Anthropic, Google Bard)
Answer questions | Question Answering API (Google Bard)
Generate code | Code API (Titan, Anthropic)
Generate complex and creative text formats | Completion API (Anthropic)
Translate languages with high accuracy | Completion API (Google Bard)
Handle complex queries | Question Answering API (Google Bard)
Integrate with other Google services | Completion API (Google Bard)
Build and scale generative AI applications without investing in infrastructure | Titan (Amazon Bedrock)
Develop cutting-edge AI applications | Anthropic
Handle conversational and contextual tasks | Chat API (Google Bard)

Deploying a generative AI solution using Azure OpenAI on an Azure tenant
requires several services and considerations. Here's a breakdown of the essential
components:

1. Azure OpenAI Service: This is the core service that provides access to OpenAI's
large language models (LLMs) like GPT-3 and GPT-4. It offers a REST API for
interacting with the models and generating text, translating languages, writing
creative content, and answering questions.
2. Azure Storage: To store the training data and generated outputs, you'll need
Azure Blob Storage or Azure Data Lake Storage. These services provide
scalable and secure storage for large volumes of data.
3. Azure Compute: Depending on the complexity and computational requirements
of your generative AI solution, you may need Azure virtual machines (VMs) or
Azure Functions to run the application code. Azure VMs offer a flexible and
scalable compute environment, while Azure Functions provide a serverless
platform for running code without managing infrastructure.
4. Azure Networking: To connect your application to Azure OpenAI and other Azure
services, you'll need a virtual network (VNet) and network security groups
(NSGs). These components ensure secure and controlled communication within
your Azure environment.
5. Azure Monitoring: To monitor the health and performance of your generative AI
solution, you can use Azure Monitor. It provides insights into resource utilization,
error logs, and application performance metrics.
6. Deployment Automation: To automate the deployment process of your
generative AI solution, consider using Azure DevOps or Azure Resource
Manager templates. These tools enable consistent and repeatable deployments
across different environments.
7. Access Management: To manage access to your Azure OpenAI resources and
control who can use the generated outputs, implement Azure Active Directory
(Azure AD) and role-based access control (RBAC). These mechanisms ensure
secure access to sensitive data and prevent unauthorized usage.
8. Data Governance: Establish data governance policies and procedures to ensure
the responsible collection, storage, and usage of data for training and generating
AI models. This includes data privacy considerations, data quality checks, and
bias mitigation strategies.
9. Model Monitoring: Continuously monitor the performance and fairness of your
generative AI models to detect and address potential issues like bias, accuracy
drift, and inappropriate outputs. Utilize tools like Azure Machine Learning Service
(AML) to track model performance metrics and identify anomalies.
10. Explainability and Interpretability: Implement techniques to explain and interpret
the decisions made by your generative AI models, especially when used for
critical decision-making processes. This helps understand the rationale behind
model outputs and build trust in the AI system.
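To make component 1 concrete: Azure OpenAI is called over a deployment-scoped REST endpoint. The sketch below only builds the request URL and headers; the resource name, deployment name, and the `2023-05-15` API version are placeholder assumptions, so substitute your own values.

```python
def azure_openai_chat_url(resource: str, deployment: str,
                          api_version: str = "2023-05-15") -> str:
    """Build the chat-completions endpoint URL for an Azure OpenAI deployment."""
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

def azure_openai_headers(api_key: str) -> dict:
    """Azure OpenAI authenticates with an `api-key` header (not a Bearer token)."""
    return {"api-key": api_key, "Content-Type": "application/json"}

print(azure_openai_chat_url("myresource", "gpt4-deploy"))
```

A request body of the form `{"messages": [...]}` POSTed to this URL with these headers completes the call.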
Comparison of the services required for Azure and AWS to deploy Generative AI
(Gen AI) solutions:

Feature | Azure | AWS
Core Gen AI service | Azure OpenAI | Amazon SageMaker
Storage for training data and outputs | Azure Blob Storage or Azure Data Lake Storage | Amazon S3 or Amazon Elastic Block Store (EBS)
Compute for running application code | Azure Virtual Machines (VMs) or Azure Functions | Amazon Elastic Compute Cloud (EC2) or AWS Lambda
Networking | Azure Virtual Network (VNet) and Network Security Groups (NSGs) | Amazon Virtual Private Cloud (VPC) and Security Groups (SGs)
Monitoring | Azure Monitor | Amazon CloudWatch
Deployment automation | Azure DevOps or Azure Resource Manager templates | AWS CodePipeline or AWS CloudFormation
Access management | Azure Active Directory (Azure AD) and role-based access control (RBAC) | AWS Identity and Access Management (IAM) and IAM Roles for AWS Services
Data governance | Azure Data Factory or Azure Data Lake Governance | AWS Glue or AWS Lake Formation
Model monitoring | Azure Machine Learning Service (AML) | Amazon SageMaker Model Monitor
Explainability and interpretability | Azure Machine Learning Explainability | Amazon SageMaker Explainability

Here is a more detailed comparison of the key services:

Core Gen AI service:

 Azure OpenAI: Provides access to OpenAI's large language models (LLMs) like
GPT-3 and GPT-4.

 Amazon SageMaker: Provides a comprehensive toolkit for building, training, and
deploying machine learning models, including Gen AI models.

Storage for training data and outputs:

 Azure Blob Storage: Scalable and secure object storage for large amounts of
unstructured data.

 Azure Data Lake Storage: Scalable and secure hierarchical storage for both
structured and unstructured data.

 Amazon S3: Object storage for large amounts of data.

 Amazon EBS: Block storage for persistent data volumes attached to EC2
instances.

Compute for running application code:

 Azure VMs: Flexible and scalable compute environment for running application
code.

 Azure Functions: Serverless platform for running code without managing
infrastructure.

 Amazon EC2: Scalable and configurable compute environment for running
application code.
 AWS Lambda: Serverless platform for running code without managing
infrastructure.

Networking:

 Azure VNet: Logical isolation for resources within the Azure cloud.

 Azure NSGs: Rules for filtering network traffic within a VNet.

 Amazon VPC: Logical isolation for resources within the AWS cloud.

 Amazon SGs: Rules for filtering network traffic within a VPC.

Monitoring:

 Azure Monitor: Collects and analyzes data from Azure resources to monitor
performance and health.

 Amazon CloudWatch: Collects and monitors metrics, logs, and events from AWS
resources.

Deployment automation:

 Azure DevOps: Continuous integration and continuous delivery (CI/CD) platform
for automating the deployment process.

 Azure Resource Manager templates: Code templates for defining and deploying
Azure resources.

 AWS CodePipeline: CI/CD platform for automating the deployment process.

 AWS CloudFormation: Template-based infrastructure as code (IaC) service for
defining and deploying AWS resources.

Access management:

 Azure AD: Provides identity and access management for Azure resources.

 Azure RBAC: Controls who can perform actions on Azure resources.

 AWS IAM: Provides identity and access management for AWS resources.
 IAM Roles for AWS Services: Allows AWS services to assume IAM roles and
perform actions on behalf of the user.

Data governance:

 Azure Data Factory: Data integration service for moving and transforming data.

 Azure Data Lake Governance: Data governance service for managing data
assets in Azure Data Lake Storage.

 AWS Glue: Data integration service for moving and transforming data.

 AWS Lake Formation: Data governance service for managing data assets in
Amazon S3.

Model monitoring:

 Azure Machine Learning Service (AML): Tracks model performance metrics and
helps identify anomalies and drift.

 Amazon SageMaker Model Monitor: Continuous monitoring of machine learning
models for performance and drift.

Explainability and interpretability:

 Azure Machine Learning Explainability: Provides tools for explaining machine
learning models.

 Amazon SageMaker Explainability: Provides tools for explaining machine
learning models.

In general, both Azure and AWS offer a comprehensive set of services for deploying
Gen AI solutions. The best choice for a particular organization will depend on its
specific needs and experience with each platform.
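For quick programmatic reference, the core service mapping from the comparison table above (service names as listed in this document):

```python
# Azure-to-AWS counterparts as listed in the comparison table above.
AZURE_TO_AWS = {
    "Azure OpenAI": "Amazon SageMaker",
    "Azure Blob Storage": "Amazon S3",
    "Azure Functions": "AWS Lambda",
    "Azure Monitor": "Amazon CloudWatch",
    "Azure DevOps": "AWS CodePipeline",
    "Azure Active Directory": "AWS IAM",
}

def aws_equivalent(azure_service: str) -> str:
    """Look up the AWS counterpart of an Azure service from the table."""
    return AZURE_TO_AWS.get(azure_service, "no direct equivalent listed")

print(aws_equivalent("Azure Monitor"))  # Amazon CloudWatch
```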
