
TDSynexx - Virtual Training Session

The document outlines a training session focused on Generative AI and GitHub Copilot, detailing the objectives, tools, and features of each technology. It covers the integration of Azure OpenAI services, prompt engineering, and the capabilities of GitHub Copilot in code development. The training includes hands-on labs and a team-based hackathon to enhance learning and practical application of these technologies.


TDSynexx: GenAI and GitHub Copilot
Training Session
About Us

Cloud Born: Founded in 2016 to provide solutions and services around Cloud Technologies.
Products: SaaSify, CloudLabs, CSP Control Center.
Segments: ISVs, Cloud Partners, Educational Institutes.
Global: United States, Canada, India, Mexico.
About CloudLabs

HOL Platform: Build, deliver and consume custom technology labs.
Demonstrations: Showcase your tech product with impactful scenarios.
Events: Managed support for in-person and virtual events for hands-on labs.
Hackathons: Organize impactful technology events with live progress tracking.
Pre-Built Labs: Authorized lab hoster for Microsoft and Check Point.
Cloud Orchestrator: Automate your cloud environments provisioning and lifecycle.
Automation: ARM Templates, PowerShell, CLI, Python, Azure & Graph APIs.
24*7 Live Support: Instant support for any technical and non-technical problems.
Guided Learning + Team-Based Hackathon

Pre-Hackathon Prep – Guided Labs

The pre-prep session for the hackathon will include a virtual instructor-led session covering Azure Generative AI, Azure OpenAI Services, and GitHub Copilot.

Get access to guided labs:
• Develop Generative AI solutions with Azure OpenAI Service (duration: 8 hours)
• Innovate with GitHub Copilot (duration: 6 hours)

Hackathon – Team-Based Approach

On the day of the hackathon, meet the instructors and proctors, who will:
• Provide an overview of the technology stack
• Introduce the hackathon problem statement
• Assist you with troubleshooting
• Support your teams in building and presenting the solutions

Day 1: Activate GenAI using Azure
Explore Azure OpenAI, prompt engineering, and AI model integration to lay the foundation for advanced AI applications.

Day 2: Building RAG Solutions with Microsoft Copilot Studio
Implement Retrieval-Augmented Generation (RAG) for intelligent, context-aware AI copilots.
Generative AI using Azure OpenAI
Objectives
Describe Generative AI
Describe Azure OpenAI and its components
Integrate Azure OpenAI into your app
Dive deep into prompt engineering
Understand RAG (Retrieval-Augmented Generation)
What is generative AI?

Artificial Intelligence (1950s): the field of computer science that seeks to create intelligent machines that can replicate or exceed human intelligence.

Machine Learning (1990s): a subset of AI that enables machines to learn from existing data and improve upon that data to make decisions or predictions.

Deep Learning (2010s): a machine learning technique in which layers of neural networks are used to process data and make decisions.

Generative AI (2020s): creates new written, visual, and auditory content given prompts or existing data.
Types of generative AI models
Azure OpenAI

Azure OpenAI Service is a cloud-based service from Microsoft that brings OpenAI’s
powerful language and vision models to the Microsoft Azure platform.

Powerful AI Models Access


• Offers REST API and SDKs (Python, C#, JS, Java, Go) for models like GPT-4, GPT-4
Turbo with Vision, GPT-4o, GPT-3.5 Turbo, and Embeddings.
• Use cases: content generation, summarization, semantic search, image
understanding, and code generation.

Key Features
• Fine-tuning Support: GPT-4, GPT-4o mini, GPT-3.5
• Security: Managed Identity, Private Link, Virtual Network
• Portal Access: Azure Portal & Azure AI Foundry
• Responsible AI: Built-in content filtering and policy enforcement

Using prompts to get completions from models

Integrating Azure OpenAI into your app
Applications submit prompts to deployed models. Responses are completions.
Three REST API endpoints:
• Completion - model takes an input prompt, and generates one or more predicted completions
• Embeddings - model takes input and returns a vector representation of that input
• ChatCompletion - model takes input in the form of a chat conversation (where roles are specified with the message they send), and
the next chat completion is generated

ChatCompletion will be the endpoint we focus on for this course

Use Completion with GPT-3 based models
Use Embeddings with text-embedding models
Use ChatCompletion with GPT-35-Turbo and later models
Using the Azure OpenAI REST API
Completion Endpoint
https://endpoint.openai.azure.com/openai/deployments/deployment/completions

Request:

{
  "prompt": "Your favorite Shakespeare play is",
  "max_tokens": 5
}

Response:

{
  "id": "1234….",
  "object": "text_completion",
  "created": 1679001781,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "text": "Macbeth",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ]
}
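The completion call above can be reproduced with only the Python standard library. This is a sketch, not the course's code: the resource endpoint, deployment name, key, and `api-version` value below are placeholders I have supplied, so check the Azure OpenAI REST reference for a current `api-version` before use.

```python
import json
import urllib.request

# Placeholder values -- substitute your own Azure OpenAI resource details.
ENDPOINT = "https://<resource>.openai.azure.com"
DEPLOYMENT = "<deployment-name>"
API_KEY = "<your-api-key>"
API_VERSION = "2024-02-01"  # assumption: any current GA api-version works here

def build_completion_request(prompt: str, max_tokens: int = 5) -> urllib.request.Request:
    """Build (but do not send) the POST request for the completions endpoint."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/completions?api-version={API_VERSION}")
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )

req = build_completion_request("Your favorite Shakespeare play is")
# Sending is left to the reader: urllib.request.urlopen(req) would return
# a JSON body shaped like the response shown above.
```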
Using the Azure OpenAI REST API
Embedding Endpoint
https://endpoint.openai.azure.com/openai/deployments/deployment/embeddings

Request:

{
  "input": "The food was delicious and the waiter was very friendly..."
}

Response:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0172990688066482523,
        ....
        0.0134544348834753042
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-ada-002"
}
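The vector returned by the Embeddings endpoint is typically compared against other vectors with cosine similarity, which is the basis of the semantic-search use case mentioned earlier. A minimal, dependency-free sketch:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 2-dimension vectors stand in for real 1,536-dimension ada-002 embeddings.
score = cosine_similarity([0.017, 0.013], [0.018, 0.012])
print(round(score, 3))  # close to 1.0 for near-identical directions
```

In a semantic-search pipeline, the query's embedding is scored against every stored document embedding and the highest-scoring documents are returned.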
Using the Azure OpenAI REST API
Chat Completion Endpoint
https://endpoint.openai.azure.com/openai/deployments/deployment/chat/completions

Request:

{
  "messages": [
    {"role": "system", "content": "You are an assistant that teaches people about AI."},
    {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
    {"role": "assistant", "content": "Yes, Azure OpenAI supports several languages."},
    {"role": "user", "content": "Do other Cognitive Services support translation?"}
  ]
}

Response:

{
  "id": "unique_id",
  "object": "chat.completion",
  "created": 1679001781,
  "model": "gpt-35-turbo",
  "usage": {"prompt_tokens": 95, "completion_tokens": 84, "total_tokens": 179},
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Yes, other Azure Cognitive Services also support translation…"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
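One point worth underlining about ChatCompletion: the endpoint is stateless, so the application must resend the whole conversation (as in the request above) on every turn. A small illustrative helper — the function name is my own, not part of any SDK:

```python
def extend_conversation(messages, assistant_reply, next_user_message):
    """Return a new message list carrying the conversation into the next turn.

    ChatCompletion has no memory of earlier calls, so the full history --
    including the model's own prior replies -- is sent again each time.
    """
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_message},
    ]

history = [
    {"role": "system", "content": "You are an assistant that teaches people about AI."},
    {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
]
history = extend_conversation(
    history,
    "Yes, Azure OpenAI supports several languages.",
    "Do other Cognitive Services support translation?",
)
print(len(history))  # 4 messages now travel with the next request
```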
Using language-specific SDKs

Language-specific SDKs are available for use in your applications, such as Azure OpenAI's .NET library and OpenAI's Python library.
Code structure follows a similar pattern across languages.
Parameters such as Max Response and Temperature are defined in the chat options.
Both synchronous and asynchronous API versions are available.

Pseudo-code structure:

<include library>
<create client>
<define chat messages and options>
<send request>
<extract response content>
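The pseudo-code structure maps onto the OpenAI Python library roughly as follows. This is a sketch under assumptions: it presumes the `openai` package (v1+) with its `AzureOpenAI` client, and the endpoint, key, and deployment name are placeholders. The library import is deferred into the function so the message and option definitions can be read (and checked) without the package installed.

```python
# <define chat messages and options>
chat_messages = [
    {"role": "system", "content": "You are an assistant that teaches people about AI."},
    {"role": "user", "content": "Does Azure OpenAI support multiple languages?"},
]
chat_options = {"max_tokens": 200, "temperature": 0.7}  # Max Response / Temperature

def get_chat_completion(endpoint: str, api_key: str, deployment: str) -> str:
    # <include library> -- deferred so the sketch stands alone without the package
    from openai import AzureOpenAI  # assumes `pip install openai` (v1+)

    # <create client>
    client = AzureOpenAI(
        azure_endpoint=endpoint,
        api_key=api_key,
        api_version="2024-02-01",  # placeholder; use a current api-version
    )

    # <send request>
    response = client.chat.completions.create(
        model=deployment,  # the *deployment* name, not the base model name
        messages=chat_messages,
        **chat_options,
    )

    # <extract response content>
    return response.choices[0].message.content
```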
What is Prompt Engineering?

Constructing prompts to:
• Maximize relevancy and accuracy of completions
• Specify formatting and style of completions
• Provide conversational context
• Mitigate bias and improve fairness
Using a language model "out of the box"

Prompt:
What is there to see and do in London?

Completion (response):
London is a vibrant city with a wide range of attractions and activities. Here are some popular things to see and do in London:
• Buckingham Palace: Tour the State Rooms and witness the Changing of the Guard ceremony.
• Tower of London: Explore the historic castle and see the Crown Jewels.

[Diagram: AI App → Language Model (LLM/SLM), trained on general training data]
Using prompt engineering

Prompt:
System: You are an AI travel agent that advises on sightseeing activities in major cities.
User: Give me ideas for things to see and do in London to help me plan a trip there. Include a numbered list of the top 10 places to visit in the city.

Completion (response):
Certainly! Here are the top 10 places to visit in London:
1. Buckingham Palace: Tour the State Rooms and witness the Changing of the Guard ceremony.
2. Tower of London: Explore the historic castle and see the Crown Jewels.

[Diagram: AI App → Language Model (LLM/SLM), trained on general training data]
Prompt Engineering
Providing clear instructions

Vague prompt:
Write a product description for a new water bottle

Completion:
Introducing the latest addition to our product line - the innovative and eco-friendly water bottle. Made from high-quality, BPA-free materials, this bottle is designed to keep your beverages at the perfect temperature for hours. Its sleek and stylish design makes it the perfect accessory for any outdoor activity or workout session. With its convenient carrying handle and leak-proof lid, you can take it with you wherever you go, without worrying about spills or leaks. Plus, its easy-to-clean design ensures that you can use it again and again, making it the perfect choice for anyone looking for a reliable and sustainable water bottle.

Clear prompt:
Write a product description for a new water bottle that is 100% recycled. Be sure to include that it comes in natural colors with no dyes, and each purchase removes 10 pounds of plastic from our oceans.

Completion:
Introducing our newest water bottle, the eco-friendly choice for the conscious consumer. Made from 100% recycled materials and available in natural colors with no dyes, this bottle is not only stylish but also sustainable. Each purchase of this bottle removes 10 pounds of plastic from our oceans, making it a responsible choice for those who care about the environment. Its durable and leak-proof design makes it perfect for outdoor activities, while its easy-to-carry handle makes it convenient to take on the go. Join us in our mission to reduce plastic waste and choose this eco-friendly water bottle for your daily hydration needs.
RAG: Retrieval Augmented Generation

Prompt:
System: You are an AI travel agent that advises on sightseeing activities in major cities.
User: Give me ideas for things to see and do in London to help me plan a trip there. Include a numbered list of the top 10 places to visit in the city.

Completion (response):
Certainly! Here are the top 10 activities offered by Margie's Travel:
1. Tour of the Tower of London¹
2. Open top bus tour of the City of Westminster²

¹ Margie's London brochure
² Margie's city tours pamphlet

[Diagram: AI App → Language Model (LLM/SLM), grounded by both general training data and contextual data from a vector index]
Connect to your data source

• Add your data source in the Chat playground, under Add your data
• Use an existing data source, or use the wizard to create a new one
• Once connected, a new chat session will start. Chat like normal, and see how the AI model references that data
How Azure OpenAI can use your data

1. Set up your data source
   • Use an existing data source, such as an Azure AI Search resource
   • Use the Azure AI Foundry portal to create that data source, if you don't already have one
   • When creating the data source, you can use data already in your account, such as blob storage

2. Configure Azure AI Foundry or your app to connect to that data source
   • In the Azure AI Foundry portal, set up the connection by pointing it to the data source
   • In your app, specify the data source in the prompt parameters
   • Both configurations allow the search resource to augment the prompt

3. Use the Azure OpenAI model, which now uses your data for grounding
   • Chat with the AI models like normal
   • If the data source has relevant information about the prompt, it will use that data
   • You can specify if the AI model is limited to just your data source
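In an app, "specify the data source in the prompt parameters" means adding a data-source block to the chat completions request body. The sketch below shows the general shape for an Azure AI Search source; the field names follow the `data_sources` extension as I understand it, and every value is a placeholder — verify the exact names against the current Azure OpenAI "on your data" reference before relying on them.

```python
# Hypothetical request body for a grounded ("on your data") chat completion.
# All endpoint/index/key values are placeholders.
grounded_request = {
    "messages": [
        {"role": "system",
         "content": "You are an AI travel agent that advises on sightseeing "
                    "activities in major cities."},
        {"role": "user",
         "content": "Give me ideas for things to see and do in London."},
    ],
    # Extension block that points the service at your search index for grounding.
    "data_sources": [
        {
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://<search-resource>.search.windows.net",
                "index_name": "<index-name>",
                "authentication": {"type": "api_key", "key": "<search-key>"},
                "in_scope": True,  # limit answers to just your data source
            },
        }
    ],
}
```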
GitHub Copilot

Objectives
Describe GitHub Copilot account types, benefits, features, and limitations
Install the GitHub Copilot extensions for Visual Studio Code
Describe the code development features of GitHub Copilot in Visual Studio Code
Implement code improvements using GitHub Copilot tools
Describe GitHub Copilot support for code reliability and performance
GitHub Copilot tools, benefits, and limitations
GitHub Copilot Individual includes the following features:
Code completion in IDEs.
Copilot Chat in IDEs.
Copilot Chat in GitHub Mobile.
Copilot in the CLI.
Block suggestions matching public code.
GitHub Copilot Business adds the following features:
Exclude specified files from Copilot.
Organization-wide policy management.
Audit logs.
Increased GitHub Models rate limits.
GitHub Copilot Enterprise adds the following features:
Copilot Chat on GitHub.com.
Copilot Chat skills in IDEs.
Copilot pull request summaries.
Copilot knowledge bases.
GitHub Copilot tools, benefits, and limitations

GitHub Copilot benefits:
GitHub Copilot natively integrates with Visual Studio Code, Visual Studio, and other developer environments.
GitHub Copilot works with all popular programming languages.
GitHub Copilot dramatically accelerates the overall productivity of developers.

Limitations:
The LLM is trained on a limited scope.
The LLM could contain coding biases.
Copilot suggestions could include security risks, public code, or inaccurate code.

Limitations can be mitigated by following best practices.
GitHub Copilot extensions

• The GitHub Copilot extension supports auto completions in the code editor.
• The GitHub Copilot Chat extension provides AI-enabled chat features that are focused on developer scenarios.
AI assistance features of the GitHub Copilot Chat extension
The GitHub Copilot Chat extension uses developer input to complete specific tasks. The user interface
accepts either commands or natural language prompts to:
Explain and/or document code.
Answer coding questions.
Generate new code (including unit test cases).
Suggest bug fixes.
Suggest improvements to existing code.

GitHub Copilot Chat uses prompts or other input to generate suggestions. To create a prompt:
Write a natural language prompt that describes your intent.
Specify the code or code files within your codebase that provide context for the prompt.

GitHub Copilot Chat implements the following process to generate a response:
Input processing: A prompt is preprocessed before sending it to the large language model (LLM).
Language model analysis: The LLM analyzes the prompt along with the associated code context.
Response generation: The LLM generates code, code suggestions, or explanations.
Output formatting: The response is formatted and presented to the user.
AI assistance features of the GitHub Copilot Chat extension

GitHub Copilot Chat adds the following interactive features to the Visual Studio Code user interface:
 Chat View: a panel in the user interface that's used to display a chat conversation with the AI pair programmer.
 Inline Chat: a text dialog that developers can use to enter prompts inline as they're coding.
 Smart Actions: a command interface used to complete common tasks without having to write a prompt.
 Quick Chat: a chat window that enables a quick, interactive conversation with the AI.

[Figure: GitHub Copilot Chat's Inline Chat feature]
AI assistance features of the GitHub Copilot Chat extension
Your prompts for GitHub Copilot Chat communicate two things:
Intent: The natural language text that describes what you want to achieve.
Context: The area of focus associated with the specified intent.

GitHub Copilot Chat recognizes specialized prompt entries that can be used to specify context:
Chat Participants: Help you indicate the scope and intent of your prompt.
Slash Commands: Shortcuts to specific functionality provided by the chat participant.
Chat Variables: Provide domain-specific context.
Basic guidelines for writing prompts:
When writing the intent portion of a prompt: Be specific and keep it simple.
When defining the context of a prompt: Use chat participants, slash commands, and chat variables.

Code development features of GitHub Copilot

Code completion suggestions:
• Code line completions are triggered as you work in the code editor.
• Code line completions can be single-line completions or multi-line suggestions.
• Code line completions can be generated as you type or based on code comments.

Chat suggestions:
 Chat suggestions are based on existing code and a natural language prompt.
 Chat suggestions can be generated from prompts entered in the Chat View, Inline Chat, or Quick Chat.
GitHub Copilot best practices

Consider the following items when using code line completions:
 Provide context with open files, top-level comments, and appropriate includes/references.
 Use meaningful function names and comments.
 Prime GitHub Copilot with sample code.
 Be consistent and keep the quality bar high (code quality).
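As an illustration of "provide context with comments" and "use meaningful function names": given only the comment and signature below, Copilot will usually propose a body much like the hand-written one shown. The suggestion itself varies run to run; this is a typical outcome, not a guaranteed one.

```python
# Convert a temperature from Celsius to Fahrenheit and round to one decimal.
def celsius_to_fahrenheit(celsius: float) -> float:
    # A completion like the following is what the comment + name typically elicit:
    return round(celsius * 9 / 5 + 32, 1)

print(celsius_to_fahrenheit(100))  # 212.0
```

A vaguer name like `convert(x)` with no comment gives the model far less to work with, which is exactly the point of these two practices.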

GitHub Copilot best practices

Consider the following items when writing prompts:
 Start general, then get specific.
 Give examples.
 Break complex tasks into simpler tasks.
 Avoid ambiguity.
 Indicate relevant code using chat participants, slash commands, and chat variables.
 Experiment and iterate.
 Keep the chat history relevant.
 Follow good coding practices.

Always review code suggestions and only accept suggestions that meet your requirements.
Generate code using code line completions

Code line completions are generated when you type:
• GitHub Copilot uses the context of your application to generate code completion suggestions as you type.
• You can accept an entire suggestion, or one word at a time.
• When more than one suggestion is generated, you can scroll through them.
• You can also open an editor tab with additional suggestions.
Generate code using GitHub Copilot Chat

GitHub Copilot Chat can be used to complete coding tasks:
 Scaffold an application.
 Generate new code.
 Convert code from one programming language to another.
 Fix coding issues.
 Create unit tests.
 Suggest improvements to existing code.
Generate code using GitHub Copilot Chat

Generate and manage suggestions using the Chat View:
 You can generate code suggestions in the Chat View using a prompt that includes a combination of chat participants, slash commands, and chat variables.
 You can manage code suggestions in the Chat View interface. Code suggestions in the response can be copied, inserted into the code editor, inserted into a new file, or inserted into the terminal.
Generate code using GitHub Copilot Chat

Generate and manage suggestions using the Inline Chat window:
 You can generate code suggestions using Inline Chat by selecting code and entering a prompt.
 You can manage code suggestions in the Inline Chat interface. Code suggestions can be accepted or discarded.
GitHub Copilot support for code improvements

Requirements for implementing code improvements:
• The knowledge and ability to build effective prompts.
• The ability to evaluate and implement suggested updates.

General categories for code improvements:
• Code quality.
• Code reliability.
• Code performance.
• Code security.
Code quality improvements
Factors to consider when investigating code quality improvements:
• Improve code readability.
• Reduce code complexity.
• Ensure code modularity and reusability.
Example prompts for improving code quality:
• "Suggest refactoring updates to enhance the modularity of the User class."
• "Provide design pattern suggestions to improve the internal structure of the OrderService class."
• "Recommend code style enhancements to make the ProductController class more readable."
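To make the first kind of prompt concrete, here is the sort of before/after a refactoring suggestion might produce. Both versions are hand-written for illustration (the class names in the prompts above are the slide's examples, not code shown here); the point is reduced nesting, not changed behavior.

```python
# Before: deeply nested conditionals make the pricing rule hard to scan.
def discount_before(price, is_member, coupon):
    if price > 0:
        if is_member:
            if coupon:
                return price * 0.8
            else:
                return price * 0.9
        else:
            return price
    else:
        return 0

# After: same logic, flattened with guard clauses for readability.
def discount_after(price: float, is_member: bool, coupon: bool) -> float:
    if price <= 0:
        return 0
    if not is_member:
        return price
    return price * (0.8 if coupon else 0.9)
```

When evaluating such a suggestion, confirm equivalence (e.g. with existing unit tests) before accepting it — exactly the "evaluate and implement suggested updates" requirement from the previous slide.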

Code reliability and performance
Relationship between reliability and performance:
• Improvements in one area can positively affect the other.
• Deficiencies in one area can lead to problems with the other.
A balance between reliability and performance leads to:
• Applications that meet functional requirements.
• Applications that deliver a seamless and efficient user experience.
Factors to consider when investigating code reliability:
• Potential issues.
• Testing.
• Exception handling.
Factors to consider when investigating code performance:
• Optimize algorithms.
• Avoid unnecessary computations.
• Use multithreading and asynchronous programming.
• Minimize I/O operations.
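A toy sketch of the last two bullets: with asynchronous programming, independent I/O operations overlap instead of running back-to-back. Here `asyncio.sleep` stands in for a real network or disk call.

```python
import asyncio
import time

async def fetch(i: int) -> int:
    """Stand-in for an I/O-bound call (network request, disk read, ...)."""
    await asyncio.sleep(0.1)
    return i * i

async def main() -> list[int]:
    # The three "calls" run concurrently, so total wall time is ~0.1s, not ~0.3s.
    return await asyncio.gather(*(fetch(i) for i in range(3)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results)  # [0, 1, 4]
```

The same pattern applies when a Copilot prompt asks for performance improvements: sequential awaits in a loop are a common suggestion target for `asyncio.gather`.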

Course Activation Walkthrough

How to access the course

1. Navigate to https://ms-sandbox.cloudlabs.ai and click Login/Sign Up.
2. Select Work or School Account.
3. Choose the Redeem Code tab, enter the code TDSYNEXXGENAICOPILOTMUM, and click Redeem.
4. Once you have registered, click Go To Courses.
5. From the course, click the Get Started button to access the hands-on labs.

Launch the Lab

To launch the lab, click the Launch button to start the hands-on labs.

Thank you!
