GroqCloud API Reference


Chat

Create chat completion


POST https://api.groq.com/openai/v1/chat/completions

Creates a model response for the given chat conversation.

Request Body
frequency_penalty number or null Optional Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in
the text so far, decreasing the model's likelihood to repeat the same line verbatim.

function_call Deprecated string / object or null Optional


Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model. none means the model will not call a function and
instead generates a message. auto means the model can pick between generating a message or calling
a function. Specifying a particular function via {"name": "my_function"} forces the model to call that
function.
none is the default when no functions are present. auto is the default if functions are present.

functions Deprecated array or null Optional


Deprecated in favor of tools.
A list of functions the model may generate JSON inputs for.

logit_bias object or null Optional Defaults to null


This is not yet supported by any of our models. Modify the likelihood of specified tokens appearing in the
completion.

logprobs boolean or null Optional Defaults to false


This is not yet supported by any of our models. Whether to return log probabilities of the output tokens or
not. If true, returns the log probabilities of each output token returned in the content of message.

max_tokens integer or null Optional


The maximum number of tokens that can be generated in the chat completion. The total length of input
tokens and generated tokens is limited by the model's context length.

messages array Required


A list of messages comprising the conversation so far.

model string Required


ID of the model to use. For details on which models are compatible with the Chat API, see available
models.

n integer or null Optional Defaults to 1


How many chat completion choices to generate for each input message. Note that at the current moment,
only n=1 is supported; other values will result in a 400 response.

parallel_tool_calls boolean or null Optional Defaults to true


Whether to enable parallel function calling during tool use.
presence_penalty number or null Optional Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
text so far, increasing the model's likelihood to talk about new topics.

response_format object or null Optional


An object specifying the format that the model must output.
Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model
generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a
system or user message.
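
As a minimal sketch of JSON mode (the requested keys in the system message are illustrative, not part of the API):

import Groq from "groq-sdk";

const groq = new Groq(); // reads GROQ_API_KEY from the environment

async function main() {
  const completion = await groq.chat.completions.create({
    messages: [
      // JSON mode requires instructing the model to produce JSON yourself:
      { role: "system", content: 'Reply in JSON with keys "answer" and "confidence".' },
      { role: "user", content: "Is the sky blue?" },
    ],
    model: "mixtral-8x7b-32768",
    response_format: { type: "json_object" },
  });
  // JSON mode guarantees the content parses as valid JSON.
  console.log(JSON.parse(completion.choices[0].message.content));
}
main();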

seed integer or null Optional


If specified, our system will make a best effort to sample deterministically, such that repeated requests
with the same seed and parameters should return the same result. Determinism is not guaranteed, and
you should refer to the system_fingerprint response parameter to monitor changes in the backend.

stop string / array or null Optional Defaults to null


Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain
the stop sequence.

stream boolean or null Optional Defaults to false


If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they
become available, with the stream terminated by a data: [DONE] message (see the sketch below).
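
A sketch of consuming such a stream with the groq-sdk (this assumes the SDK returns the SSE stream as an async iterable, mirroring its OpenAI-compatible design):

import Groq from "groq-sdk";

const groq = new Groq();

async function main() {
  const stream = await groq.chat.completions.create({
    messages: [{ role: "user", content: "Explain the importance of fast language models" }],
    model: "mixtral-8x7b-32768",
    stream: true,
  });
  for await (const chunk of stream) {
    // Each chunk carries a delta; content is absent on some chunks, so fall back to "".
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}
main();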

stream_options object or null Optional Defaults to null


Options for streaming response. Only set this when you set stream: true.

temperature number or null Optional Defaults to 1


What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more
random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend
altering this or top_p but not both.

tool_choice string / object or null Optional


Controls which (if any) tool is called by the model. none means the model will not call any tool and
instead generates a message. auto means the model can pick between generating a message or calling
one or more tools. required means the model must call one or more tools. Specifying a particular tool
via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.

tools array or null Optional


A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a
list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
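
For illustration, a hedged sketch of a single-function tools array (the get_weather function, its parameters, and the prompt are hypothetical):

import Groq from "groq-sdk";

const groq = new Groq();

async function main() {
  const completion = await groq.chat.completions.create({
    messages: [{ role: "user", content: "What's the weather in Boston?" }],
    model: "mixtral-8x7b-32768",
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather", // hypothetical function name
          description: "Get the current weather for a city",
          parameters: {
            // JSON Schema describing the function's arguments
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
    tool_choice: "auto", // let the model decide whether to call the tool
  });
  // If the model chose to call the tool, the arguments arrive as a JSON string.
  const toolCalls = completion.choices[0]?.message?.tool_calls;
  if (toolCalls) console.log(toolCalls[0].function.name, toolCalls[0].function.arguments);
}
main();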

top_logprobs integer or null Optional


This is not yet supported by any of our models. An integer between 0 and 20 specifying the number of
most likely tokens to return at each token position, each with an associated log probability. logprobs
must be set to true if this parameter is used.

top_p number or null Optional Defaults to 1


An alternative to sampling with temperature, called nucleus sampling, where the model considers the
results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%
probability mass are considered. We generally recommend altering this or temperature but not both.

user string or null Optional


A unique identifier representing your end-user, which can help us monitor and detect abuse.

Returns
Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request
is streamed.
import Groq from "groq-sdk";

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

async function main() {
  const completion = await groq.chat.completions
    .create({
      messages: [
        {
          role: "user",
          content: "Explain the importance of fast language models",
        },
      ],
      model: "mixtral-8x7b-32768",
    })
    .then((chatCompletion) => {
      console.log(chatCompletion.choices[0]?.message?.content || "");
    });
}

main();

{
"id": "34a9110d-c39d-423b-9ab9-9c748747b204",
"object": "chat.completion",
"created": 1708045122,
"model": "mixtral-8x7b-32768",
"system_fingerprint": null,
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Low latency Large Language Models (LLMs) are important in the field of artificial intelligence and natural la
},
"finish_reason": "stop",
"logprobs": null
}
],
"usage": {
"prompt_tokens": 24,
"completion_tokens": 377,
"total_tokens": 401,
"prompt_time": 0.009,
"completion_time": 0.774,
"total_time": 0.783
}
}

Audio

Create transcription
POST https://api.groq.com/openai/v1/audio/transcriptions

Transcribes audio into the input language.

Request Body
file string Required
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga,
m4a, ogg, wav, or webm.
language string Optional
The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy
and latency.

model string Required


ID of the model to use. Only whisper-large-v3 is currently available.

prompt string Optional


An optional text to guide the model's style or continue a previous audio segment. The prompt should
match the audio language.

response_format string Optional Defaults to json


The format of the transcript output, in one of these options: json, text, or verbose_json.

temperature number Optional Defaults to 0


The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random,
while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log
probability to automatically increase the temperature until certain thresholds are hit.

timestamp_granularities[] array Optional Defaults to segment


The timestamp granularities to populate for this transcription. response_format must be set to
verbose_json to use timestamp granularities. Either or both of these options are supported: word or
segment. Note: There is no additional latency for segment timestamps, but generating word timestamps
incurs additional latency (see the sketch below).
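
A sketch of requesting word-level timestamps (assumptions: the SDK parameter name drops the [] suffix used in the raw multipart form, and the words field follows the Whisper verbose_json convention):

import fs from "fs";
import Groq from "groq-sdk";

const groq = new Groq();

async function main() {
  const transcription = await groq.audio.transcriptions.create({
    file: fs.createReadStream("sample_audio.m4a"),
    model: "whisper-large-v3",
    response_format: "verbose_json", // required for timestamp granularities
    timestamp_granularities: ["word"], // incurs additional latency (see above)
  });
  // Each entry is expected to carry the word plus start/end times in seconds.
  console.log(transcription.words);
}
main();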

Returns
Returns an audio transcription object

import fs from "fs";
import Groq from "groq-sdk";

const groq = new Groq();
async function main() {
  const transcription = await groq.audio.transcriptions.create({
    file: fs.createReadStream("sample_audio.m4a"),
    model: "whisper-large-v3",
    prompt: "Specify context or spelling", // Optional
    response_format: "json", // Optional
    language: "en", // Optional
    temperature: 0.0, // Optional
  });
  console.log(transcription.text);
}
main();

{
"text": "Your transcribed text appears here...",
"x_groq": {
"id": "req_unique_id"
}
}

Create translation
POST https://api.groq.com/openai/v1/audio/translations
Translates audio into English.

Request Body
file string Required
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a,
ogg, wav, or webm.

model string Required


ID of the model to use. Only whisper-large-v3 is currently available.

prompt string Optional


An optional text to guide the model's style or continue a previous audio segment. The prompt should be in
English.

response_format string Optional Defaults to json


The format of the transcript output, in one of these options: json, text, or verbose_json.

temperature number Optional Defaults to 0


The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random,
while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log
probability to automatically increase the temperature until certain thresholds are hit.

Returns
Returns an audio translation object

import fs from "fs";
import Groq from "groq-sdk";

const groq = new Groq();
async function main() {
  const translation = await groq.audio.translations.create({
    file: fs.createReadStream("sample_audio.m4a"),
    model: "whisper-large-v3",
    prompt: "Specify context or spelling", // Optional
    response_format: "json", // Optional
    temperature: 0.0, // Optional
  });
  console.log(translation.text);
}
main();

{
"text": "Your translated text appears here...",
"x_groq": {
"id": "req_unique_id"
}
}

Models

List models
GET https://api.groq.com/openai/v1/models

Lists the currently available models.

Returns
A list of models
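
A minimal sketch with the groq-sdk (this assumes the SDK exposes the endpoint as models.list(), mirroring its other OpenAI-compatible namespaces):

import Groq from "groq-sdk";

const groq = new Groq(); // reads GROQ_API_KEY from the environment

async function main() {
  const models = await groq.models.list();
  // Print each model's id and context window.
  for (const model of models.data) {
    console.log(model.id, model.context_window);
  }
}
main();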

{
"object": "list",
"data": [
{
"id": "gemma-7b-it",
"object": "model",
"created": 1693721698,
"owned_by": "Google",
"active": true,
"context_window": 8192
},
{
"id": "llama2-70b-4096",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 4096
},
{
"id": "mixtral-8x7b-32768",
"object": "model",
"created": 1693721698,
"owned_by": "Mistral AI",
"active": true,
"context_window": 32768
}
]
}

Retrieve model
GET https://api.groq.com/openai/v1/models/{model}

Gets details about the specified model.

Returns
A model object
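
And a matching sketch for fetching a single model (assuming a models.retrieve() method alongside the list call above):

import Groq from "groq-sdk";

const groq = new Groq();

async function main() {
  // Fetch metadata for a single model by id.
  const model = await groq.models.retrieve("llama2-70b-4096");
  console.log(model.id, model.context_window);
}
main();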

{
"id": "llama2-70b-4096",
"object": "model",
"created": 1693721698,
"owned_by": "Meta",
"active": true,
"context_window": 4096
}
