TDSynexx - Virtual Training Session
GitHub Copilot
Training Session
1
About Us
About CloudLabs
HOL Platform: Build, deliver, and consume custom technology labs.
Demonstrations: Showcase your tech product with impactful scenarios.
Events: Managed support for in-person and virtual events for hands-on labs.
Hackathons: Organize impactful technology events with live progress tracking.
3
Guided Learning + Team-Based Hackathon
• Guided Lab: Innovate with GitHub Copilot (Duration: 6 hours)
• Day 2: Building RAG Solutions with Microsoft Copilot Studio. Implement Retrieval-Augmented Generation (RAG) for intelligent, context-aware AI copilots.
4
Generative AI using Azure
OpenAI
5
Objectives
Describe Generative AI
6
What is generative AI?
Artificial Intelligence (1950s): the field of computer science that seeks to create intelligent machines that can replicate or exceed human intelligence.
Machine Learning (1990s): a subset of AI that enables machines to learn from existing data and improve upon that data to make decisions or predictions.
Deep Learning: a subset of machine learning that uses multi-layered neural networks.
Generative AI (2020s): creates new written, visual, and auditory content given prompts or existing data.
7
Types of generative AI models
8
Azure OpenAI
Azure OpenAI Service is a cloud-based service from Microsoft that brings OpenAI’s
powerful language and vision models to the Microsoft Azure platform.
Key Features
• Fine-tuning Support: GPT-4, GPT-4o mini, GPT-3.5
• Security: Managed Identity, Private Link, Virtual Network
• Portal Access: Azure Portal & Azure AI Foundry
• Responsible AI: Built-in content filtering and policy enforcement
9
Using prompts to get completions from models
10
Integrating Azure OpenAI into your app
Applications submit prompts to deployed models. Responses are completions.
Three REST API endpoints:
• Completion: the model takes an input prompt and generates one or more predicted completions.
• Embeddings: the model takes input and returns a vector representation of that input.
• ChatCompletion: the model takes input in the form of a chat conversation (where roles are specified with the message they send), and the next chat completion is generated.
11
Using the Azure OpenAI REST API
Completion Endpoint https://endpoint.openai.azure.com/openai/deployments/deployment/completions
Request:
{
  "prompt": "Your favorite Shakespeare play is",
  "max_tokens": 5
}
Response:
{
  "id": "1234….",
  "object": "text_completion",
  "created": 1679001781,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "text": "Macbeth",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ]
}
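Assuming a response shaped like the JSON above, extracting the generated text in client code is a plain dictionary walk (a sketch, not an official SDK helper):

```python
import json

# Sample payload mirroring the completion response shown above.
response_json = """
{
  "object": "text_completion",
  "model": "gpt-35-turbo",
  "choices": [
    {"text": "Macbeth", "index": 0, "logprobs": null, "finish_reason": "stop"}
  ]
}
"""

response = json.loads(response_json)
# The generated text lives in the first element of "choices".
completion_text = response["choices"][0]["text"]
```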
12
Using the Azure OpenAI REST API
Embedding Endpoint
https://endpoint.openai.azure.com/openai/deployments/deployment/embeddings
Request:
{
  "input": "The food was delicious and the waiter was very friendly..."
}
Response:
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0172990688066482523,
        ....
        0.0134544348834753042
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-ada:002"
}
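Embedding vectors returned by this endpoint are typically compared with cosine similarity; a stdlib-only sketch (the short vectors are invented for illustration; real text-embedding-ada vectors have 1,536 dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings", not real model output.
v1 = [0.017, 0.250, 0.013]
v2 = [0.018, 0.240, 0.015]
similarity = cosine_similarity(v1, v2)  # close to 1.0 for similar inputs
```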
13
Using the Azure OpenAI REST API
Chat Completion Endpoint
https://endpoint.openai.azure.com/openai/deployments/deployment/chat/completions
Request:
{
  "messages": [
    {"role": "system",
     "content": "You are an assistant that teaches people about AI."},
    {"role": "user",
     "content": "Does Azure OpenAI support multiple languages?"},
    {"role": "assistant",
     "content": "Yes, Azure OpenAI supports several languages."},
    {"role": "user",
     "content": "Do other Cognitive Services support translation?"}
  ]
}
Response:
{
  "id": "unique_id",
  "object": "chat.completion",
  "created": 1679001781,
  "model": "gpt-35-turbo",
  "usage": {"prompt_tokens": 95, "completion_tokens": 84, "total_tokens": 179},
  "choices": [
    {"message":
      {"role": "assistant",
       "content": "Yes, other Azure Cognitive Services also support translation…"},
     "finish_reason": "stop",
     "index": 0}
  ]
}
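In application code, this exchange reduces to building a messages list and reading `choices[0].message.content`; a stdlib-only sketch with no network call (the response dict is a canned stand-in for what the service would return):

```python
import json

# Request body: conversation history with explicit roles.
request_body = {
    "messages": [
        {"role": "system",
         "content": "You are an assistant that teaches people about AI."},
        {"role": "user",
         "content": "Does Azure OpenAI support multiple languages?"},
    ]
}
payload = json.dumps(request_body)

# Canned response, standing in for the service's reply.
response = {
    "object": "chat.completion",
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Yes, Azure OpenAI supports several languages."},
         "finish_reason": "stop",
         "index": 0}
    ],
}
# The generated turn lives in choices[0].message.content.
reply = response["choices"][0]["message"]["content"]
```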
14
Using language specific SDKs
Language-specific SDKs are available for use in your applications, such as Azure OpenAI’s .NET library and OpenAI’s Python library. Both synchronous and asynchronous API versions are available.
Pseudo-code structure:
<include library>
<create client>
<call model>
<extract response content>
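The pseudo-code steps can be sketched in Python; `StubClient` below is a stand-in used only for illustration (a real app would create a client from the `openai` package instead):

```python
# <include library> -- in a real app: from openai import AzureOpenAI
class StubClient:
    """Stand-in for an Azure OpenAI client; echoes the prompt back."""
    def complete(self, prompt: str) -> dict:
        # A real client would send the prompt to a deployed model.
        return {"choices": [{"text": f"(echo) {prompt}"}]}

# <create client>
client = StubClient()

# <call model>
response = client.complete("Say hello")

# <extract response content>
content = response["choices"][0]["text"]
```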
15
What is Prompt Engineering?
16
Using a language model “out of the box”
The AI app sends a prompt (for example, "What is there to see and do in London?") to a language model (LLM/SLM) trained on general data, and receives a completion (response).

Using prompt engineering
With an engineered prompt, the same flow (Prompt → AI App → Language Model → Completion) produces a more useful response, for example:
"Certainly! Here are the top 10 places to visit in London:
1. Buckingham Palace: Tour the State Rooms and witness the Changing of the Guard ceremony.
2. Tower of London: Explore the historic castle and see the Crown Jewels.
…"
Prompt Engineering
19
Providing clear instructions
Prompt (vague):
"Write a product description for a new water bottle"

Completion:
"Introducing the latest addition to our product line - the innovative and eco-friendly water bottle. Made from high-quality, BPA-free materials, this bottle is designed to keep your beverages at the perfect temperature for hours. Its sleek and stylish design makes it the perfect accessory for any outdoor activity or workout session. With its convenient carrying handle and leak-proof lid, you can take it with you wherever you go, without worrying about spills or leaks. Plus, its easy-to-clean design ensures that you can use it again and again, making it the perfect choice for anyone looking for a reliable and sustainable water bottle."

Prompt (clear instructions):
"Write a product description for a new water bottle that is 100% recycled. Be sure to include that it comes in natural colors with no dyes, and each purchase removes 10 pounds of plastic from our oceans."

Completion:
"Introducing our newest water bottle, the eco-friendly choice for the conscious consumer. Made from 100% recycled materials and available in natural colors with no dyes, this bottle is not only stylish but also sustainable. Each purchase of this bottle removes 10 pounds of plastic from our oceans, making it a responsible choice for those who care about the environment. Its durable and leak-proof design makes it perfect for outdoor activities, while its easy-to-carry handle makes it convenient to take on the go. Join us in our mission to reduce plastic waste and choose this eco-friendly water bottle for your daily hydration needs."
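The same clear-instructions pattern applies when prompts are built in code, by assembling explicit requirements into the prompt text (the requirements list here is illustrative):

```python
# Explicit requirements make the model's task unambiguous.
requirements = [
    "it is made from 100% recycled materials",
    "it comes in natural colors with no dyes",
    "each purchase removes 10 pounds of plastic from our oceans",
]

prompt = ("Write a product description for a new water bottle. "
          "Be sure to include that "
          + "; ".join(requirements) + ".")
```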
20
RAG: Retrieval Augmented Generation
1. Set up your data source
• Use an existing data source, such as an Azure AI Search resource
• Use the Azure AI Foundry portal to create that data source, if you don’t already have one
• When creating the data source, you can use data already in your account, such as blob storage
2. Configure Azure AI Foundry or your app to connect to that data source
• In the Azure AI Foundry portal, set up the connection by pointing it to the data source
• In your app, specify the data source in the prompt parameters
• Both configurations allow the search resource to augment the prompt
3. Use the Azure OpenAI model, which now uses your data for grounding
• Chat with the AI models like normal
• If the data source has relevant information about the prompt, it will use that data
• You can specify if the AI model is limited to just your data source
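In app code, the "specify the data source in the prompt parameters" step amounts to adding a data-source entry to the chat request body. A sketch assuming an Azure AI Search source: the field names follow the Azure OpenAI "on your data" request shape and should be verified against the current API reference, and the endpoint and index names are placeholders:

```python
# Chat request body with a grounding data source attached.
request_body = {
    "messages": [
        {"role": "user", "content": "What does our returns policy say?"}
    ],
    "data_sources": [
        {
            "type": "azure_search",
            "parameters": {
                # Placeholders, not real resources.
                "endpoint": "https://my-search.search.windows.net",
                "index_name": "my-index",
            },
        }
    ],
}
```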
GitHub Copilot
24
Objectives
Describe GitHub Copilot account types, benefits, features, and limitations
Describe the code development features of GitHub Copilot in Visual Studio Code
25
GitHub Copilot tools, benefits, and limitations
GitHub Copilot Individual includes the following features:
Code completion in IDEs.
Copilot Chat in IDEs.
Copilot Chat in GitHub Mobile.
Copilot in the CLI.
Block suggestions matching public code.
GitHub Copilot Business adds the following features:
Exclude specified files from Copilot.
Organization-wide policy management.
Audit logs.
Increased GitHub Models rate limits.
GitHub Copilot Enterprise adds the following features:
Copilot Chat in GitHub.com.
Copilot Chat skills in IDEs.
Copilot pull request summaries.
Copilot knowledge bases.
26
GitHub Copilot tools, benefits, and limitations
Limitations:
LLM is trained on a limited scope.
LLM could contain coding biases.
Copilot suggestions could include security risks, public code, and inaccurate code.
27
• GitHub Copilot extension: supports auto completions in the code editor.
• GitHub Copilot Chat extension: provides AI-enabled chat features that are focused on developer scenarios.
28
AI assistance features of the GitHub Copilot Chat extension
The GitHub Copilot Chat extension uses developer input to complete specific tasks. The user interface
accepts either commands or natural language prompts to:
Explain and/or document code.
Answer coding questions.
Generate new code (including unit test cases).
Suggest bug fixes.
Suggest improvements to existing code.
GitHub Copilot Chat uses prompts or other input to generate suggestions. To create a prompt:
Write a natural language prompt that describes your intent.
Specify the code or code files within your codebase that provide context for the prompt.
29
AI assistance features of the GitHub Copilot Chat extension
GitHub Copilot Chat adds the following interactive features to the Visual Studio Code user interface:
Chat View: a panel in the user interface that’s used to display a chat conversation with the AI pair programmer.
Inline Chat: a text dialog that developers can use to enter prompts inline as they're coding.
Smart Actions: a command interface used to complete common tasks without having to write a prompt.
Quick Chat: a chat window that enables a quick, interactive conversation with the AI.
[Figure: GitHub Copilot Chat’s Inline Chat feature.]
30
AI assistance features of the GitHub Copilot Chat extension
Your prompts for GitHub Copilot Chat communicate two things:
Intent: The natural language text that describes what you want to achieve.
Context: The area of focus associated with the specified intent.
GitHub Copilot Chat recognizes specialized prompt entries that can be used to specify context:
Chat Participants: Help you indicate the scope and intent of your prompt.
Slash Commands: Shortcuts to specific functionality provided by the chat participant.
Chat Variables: Provide domain-specific context.
Basic guidelines for writing prompts:
When writing the intent portion of a prompt: Be specific and keep it simple.
When defining the context of a prompt: Use chat participants, slash commands, and chat variables.
31
Code development features of GitHub Copilot
32
GitHub Copilot best practices
33
GitHub Copilot best practices
Consider the following items when writing prompts:
Start general, then get specific.
Give examples.
Break complex tasks into simpler tasks.
Avoid ambiguity.
Indicate relevant code using chat participants, slash commands, and chat variables.
Experiment and iterate.
Keep the chat history relevant.
Follow good coding practices.
Always review code suggestions and only accept suggestions that meet your requirements.
34
Generate code using code line completions
35
Generate code using GitHub Copilot Chat
36
Generate code using GitHub Copilot Chat
37
Generate code using GitHub Copilot Chat
38
GitHub Copilot support for code improvements
39
Code quality improvements
Factors to consider when investigating code quality improvements:
• Improve code readability.
• Reduce code complexity.
• Ensure code modularity and reusability.
Example prompts for improving code quality:
• "Suggest refactoring updates to enhance the modularity of the User class."
• "Provide design pattern suggestions to improve the internal structure of the OrderService class."
• "Recommend code style enhancements to make the ProductController class more readable."
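As a before-and-after illustration of the kind of readability improvement such prompts ask for (the example is generic, not tied to the classes named above):

```python
# Before: terse names and a magic number obscure the intent.
def f(x):
    return [i for i in x if i > 18]

# After: descriptive names and a named constant make the rule explicit.
ADULT_AGE = 18

def filter_adults(ages):
    """Return only the ages above the adult threshold."""
    return [age for age in ages if age > ADULT_AGE]
```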
40
Code reliability and performance
Relationship between reliability and performance:
• Improvements in one area can positively affect the other.
• Deficiencies in one area can lead to problems with the other.
A balance between reliability and performance leads to:
• Applications that meet functional requirements.
• Applications that deliver a seamless and efficient user experience.
Factors to consider when investigating code reliability:
• Potential issues.
• Testing.
• Exception handling.
Factors to consider when investigating code performance:
• Optimize algorithms.
• Avoid unnecessary computations.
• Use multithreading and asynchronous programming.
• Minimize I/O operations.
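One concrete instance of "avoid unnecessary computations" is memoizing a pure function so repeated calls skip redundant work (a generic sketch, not taken from the course labs):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n: int) -> int:
    """Naive recursion made linear-time by caching prior results."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

result = fibonacci(30)  # fast, because subproblems are computed once
```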
41
Course Activation Walkthrough
42
How to access the course
43
How to access the course
44
How to access the course
Select the Redeem Code tab, enter the code TDSYNEXXGENAICOPILOTMUM, and click Redeem.
45
How to access the course
46
How to access the course
From the course, click the Get Started button to access the hands-on labs.
47
Launch the Lab
Click the Launch button to start the hands-on labs.
48
Thank you!
49