Unit-6 AI Tools - ChatGPT
AIM: Using ChatGPT to analyze prompt engineering: experiment with different types of
prompts to see how the model responds. Try asking questions, starting conversations, or
providing incomplete sentences to see how the model completes them. Ex: Prompt: "You are a
knowledgeable AI. Please answer the following question: What is the capital of France?"
PROCEDURE:
Prompt Engineering: Prompt engineering is the process of structuring text so that it can be
interpreted and understood by a generative AI model. A prompt is natural-language text describing
the task that an AI should perform.
Prompt engineering is an artificial intelligence engineering technique that serves several purposes.
It encompasses the process of refining large language models, or LLMs, with specific prompts and
recommended outputs, as well as the process of refining input to various generative AI services to
generate text or images. As generative AI tools improve, prompt engineering will also be important
in generating other kinds of content, including robotic process automation bots, 3D assets, scripts,
robot instructions and other types of content and digital artifacts.
Types of prompt engineering
In its simplest form, prompt engineering is writing text to feed to an AI model, but several forms
of prompt engineering can affect your success. Below are some types of prompt engineering you'll
use when working with generative AI models.
Text completion prompts. Text-based prompts tell the AI to complete a sentence or phrase. For
instance, you can input the text, “The dog ran fast because,” and prompt the language model to
complete the sentence.
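A text-completion prompt can be sketched as a small helper that wraps the incomplete sentence before it is sent to a model. This is only a sketch: the helper name and wording are illustrative, and the actual API call to a model is omitted so the example stays self-contained.

```python
def completion_prompt(fragment: str) -> str:
    """Wrap an incomplete sentence as a completion request for the model."""
    return f"Complete the following sentence:\n{fragment}"

# The model would be expected to continue the fragment, e.g.
# "...because it was chasing a ball."
prompt = completion_prompt("The dog ran fast because")
print(prompt)
```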
Instruction-based prompts. This prompt type uses explicit commands or instructions to help
guide the AI response. For example, you can instruct the AI to act as a user interface (UI) designer
for the rest of the interaction, telling the AI to use language like a UI designer and help the user
deal with their design problems.
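The UI-designer example above can be expressed in the chat-message format used by ChatGPT-style APIs, where a "system" message fixes the persona for the rest of the interaction and "user" messages carry the actual requests. The helper name and wording below are illustrative, not part of any API.

```python
def persona_messages(persona: str, request: str) -> list:
    """Build a chat in which a system message fixes the assistant's role."""
    return [
        {"role": "system", "content": f"You are {persona}. Stay in that role."},
        {"role": "user", "content": request},
    ]

msgs = persona_messages(
    "a user interface (UI) designer",
    "My settings page feels cluttered - how should I reorganize it?",
)
for m in msgs:
    print(f"{m['role']}: {m['content']}")
```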
Multiple-choice prompts. This prompt helps constrain the output of a language model. Offering
multiple choices and requesting the model to confine itself to a single answer lets you limit the
output and pick the most appropriate response.
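A multiple-choice prompt can be built by listing lettered options and then explicitly constraining the answer format. The function below is one way to sketch this; the exact wording of the constraint is an assumption.

```python
def multiple_choice_prompt(question: str, choices: list) -> str:
    """Render a question with lettered options and a one-letter answer rule."""
    lines = [question]
    for i, choice in enumerate(choices):
        lines.append(f"{chr(ord('A') + i)}. {choice}")
    # The final instruction constrains the model to a single answer.
    lines.append("Reply with a single letter only.")
    return "\n".join(lines)

print(multiple_choice_prompt(
    "What is the capital of France?",
    ["Berlin", "Paris", "Madrid"],
))
```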
Contextual prompts. These prompts provide contextual clues to the language model. A series of
such prompts builds on earlier ones and guides the model's decisions and reasoning in a specific direction.
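One common way to supply this context is to carry the running conversation along with each new turn, so the model interprets the latest prompt against everything said before. A minimal sketch, assuming a ChatGPT-style role/content message list:

```python
# history is the running conversation; each new request would include all of it.
history = []

def add_turn(role: str, content: str) -> None:
    """Append one turn; the accumulated list is what gives the model context."""
    history.append({"role": role, "content": content})

add_turn("user", "I'm planning a weekend trip to France.")
add_turn("assistant", "Nice - which cities are you considering?")
add_turn("user", "Paris. What should I see in one day?")

# The next request carries all three turns, so the model can resolve
# "Paris" and "one day" against the earlier messages.
print(len(history), "turns of context accompany the next prompt")
```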
Bias mitigation prompts. These prompts help refine the output of an LLM to avoid any bias. Test
different prompts to check for potential biases and make modifications to account for those
problems.
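One simple way to test for bias is to render the same neutral question over several subject terms and compare the model's answers across the resulting prompts. The template and roles below are illustrative assumptions; the comparison of outputs is the actual bias check and is left to the experimenter.

```python
# Same question, varied subject term: differences in the model's answers
# across these prompts can reveal biased defaults.
template = "Describe the day-to-day work of a {role} in three sentences."
roles = ["nurse", "engineer", "teacher"]

probe_prompts = [template.format(role=role) for role in roles]
for p in probe_prompts:
    print(p)
```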
Fine-tuning and interactive prompts. This type of prompting helps you iteratively refine
prompts by examining the output and adjusting the wording to improve the output and model
performance. Fine-tuning also lets you train the model to produce better output for a specific
set of prompts.
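The iterative refinement loop can be sketched as a sequence of successively tighter wordings of the same request. In a real session each revision would be sent to the model and its output inspected before deciding on the next one; the revisions below are illustrative.

```python
# Each revision adds a constraint based on what the previous output lacked.
revisions = [
    "Describe Paris.",
    "Describe Paris in two sentences.",
    "Describe Paris in two sentences, aimed at a first-time visitor.",
]

for step, prompt in enumerate(revisions, start=1):
    print(f"Revision {step}: {prompt}")
```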
ChatGPT model
ChatGPT works by attempting to understand your prompt and then spitting out strings of words
that it predicts will best answer your question, based on the data it was trained on. While that might
sound relatively simple, it belies the complexity of what's going on under the hood.
Bing model
At its heart, LLM AI operates on neural networks. Mimicking the human brain's structure, these
networks allow Bing Chat to learn from vast volumes of data. Based on each person's interactions
and new online content, this continuous learning ensures that the platform becomes smarter and
more efficient over time.
Bard model
Bard has a share-conversation function and a double-check function that helps users fact-check
generated results. Bard can also access information from a number of Google apps and services,
including YouTube, Maps, Hotels, Flights, Gmail, Docs and Drive, letting users apply Bard to
their personal content.