Generating content

Firebase Genkit provides an easy interface for generating content with LLMs.

Models

Models in Firebase Genkit are libraries and abstractions that provide access to various Google and non-Google LLMs.

Models are fully instrumented for observability and come with tooling integrations provided by the Genkit Developer UI: you can try any model using the model runner.
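For example, assuming you have the Genkit CLI installed, you can typically launch the Developer UI from your project directory:

genkit start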

When working with models in Genkit, you first need to configure the model you want to work with. Model configuration is performed by the plugin system. In this example you are configuring the Vertex AI plugin, which provides Gemini models.

import (
	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/plugins/vertexai"
)

// Default to the value of GCLOUD_PROJECT for the project,
// and "us-central1" for the location.
// To specify these values directly, pass a vertexai.Config value to Init.
if err := vertexai.Init(ctx, nil); err != nil {
	return err
}
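For example, to set the project and location explicitly (a minimal sketch; "my-project" is a placeholder, and the field names assume the plugin's ProjectID and Location config options):

if err := vertexai.Init(ctx, &vertexai.Config{
	ProjectID: "my-project",
	Location:  "us-central1",
}); err != nil {
	return err
}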

To use models provided by the plugin, you need a reference to the specific model and version:

model := vertexai.Model("gemini-1.5-flash")

// Some examples below use a Gemini 1.5 Pro reference:
gemini15pro := vertexai.Model("gemini-1.5-pro")

Supported models

Genkit provides model support through its plugin system. The following plugins are officially supported:

Plugin                 Models
---------------------  -------------------------------------------------------------------------
Google Generative AI   Gemini Pro, Gemini Pro Vision
Google Vertex AI       Gemini Pro, Gemini Pro Vision, Gemini 1.5 Flash, Gemini 1.5 Pro, Imagen2
Ollama                 Many local models, including Gemma, Llama 2, Mistral, and more

See the docs for each plugin for setup and usage information.

How to generate content

Genkit provides a simple helper function for generating content with models.

To simply call the model:

responseText, err := ai.GenerateText(ctx, model, ai.WithTextPrompt("Tell me a joke."))
if err != nil {
	return err
}
fmt.Println(responseText)

You can pass options along with the model call. The options that are supported depend on the model and its API.

response, err := ai.Generate(ctx, model,
	ai.WithTextPrompt("Tell me a joke about dogs."),
	ai.WithConfig(ai.GenerationCommonConfig{
		Temperature:     1.67,
		StopSequences:   []string{"cat"},
		MaxOutputTokens: 3,
	}))

Streaming responses

Genkit supports chunked streaming of model responses. To use chunked streaming, pass a callback function to Generate():

response, err := ai.Generate(ctx, gemini15pro,
	ai.WithTextPrompt("Tell a long story about robots and ninjas."),
	// stream callback
	ai.WithStreaming(
		func(ctx context.Context, grc *ai.GenerateResponseChunk) error {
			fmt.Printf("Chunk: %s\n", grc.Text())
			return nil
		}))
if err != nil {
	return err
}

// You can also still get the full response.
fmt.Println(response.Text())

Multimodal input

If the model supports multimodal input, you can pass image prompts:

imageBytes, err := os.ReadFile("img.jpg")
if err != nil {
	return err
}
encodedImage := base64.StdEncoding.EncodeToString(imageBytes)

resp, err := ai.Generate(ctx, gemini15pro, ai.WithMessages(
	ai.NewUserMessage(
		ai.NewTextPart("Describe the following image."),
		ai.NewMediaPart("", "data:image/jpeg;base64,"+encodedImage))))

The exact image prompt formats a model accepts (https URL, gs:// URL, data URI) are model-dependent.
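For example, for a model that accepts image URLs, you could pass the URL in the media part instead of inlining the data (a sketch; the URL is a placeholder):

resp, err := ai.Generate(ctx, gemini15pro, ai.WithMessages(
	ai.NewUserMessage(
		ai.NewTextPart("Describe the following image."),
		ai.NewMediaPart("image/jpeg", "https://example.com/img.jpg"))))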

Function calling (tools)

Genkit provides a function calling (tool) interface for models that support it.

myJokeTool := ai.DefineTool(
	"myJoke",
	"useful when you need a joke to tell",
	func(ctx context.Context, input *any) (string, error) {
		return "haha Just kidding no joke! got you", nil
	},
)

response, err := ai.Generate(ctx, gemini15pro,
	ai.WithTextPrompt("Tell me a joke."),
	ai.WithTools(myJokeTool))

Generate() will automatically call the tools as needed to fulfill the user prompt.
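Tools can also take structured input. Here is a minimal sketch (the jokeSubject tool and its input type are illustrative, assuming DefineTool accepts any JSON-serializable input type, as in the example above):

type jokeSubjectInput struct {
	Subject string `json:"subject"`
}

jokeSubjectTool := ai.DefineTool(
	"jokeSubject",
	"useful when you need a subject for a joke",
	func(ctx context.Context, input *jokeSubjectInput) (string, error) {
		return "a " + input.Subject + " walking into a bar", nil
	},
)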

Recording message history

Genkit models support maintaining a history of the messages sent to the model and its responses, which you can use to build interactive experiences, such as chatbots.

In the first prompt of a session, the "history" is simply the user prompt:

history := []*ai.Message{{
	Content: []*ai.Part{ai.NewTextPart(prompt)},
	Role:    ai.RoleUser,
}}

response, err := ai.Generate(ctx, gemini15pro, ai.WithMessages(history...))

When you get a response, add it to the history:

history = append(history, response.Candidates[0].Message)

You can serialize this history and persist it in a database or session storage. For subsequent user prompts, add them to the history before calling Generate():

history = append(history, &ai.Message{
	Content: []*ai.Part{ai.NewTextPart(prompt)},
	Role:    ai.RoleUser,
})

response, err = ai.Generate(ctx, gemini15pro, ai.WithMessages(history...))
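For persistence, here is a minimal sketch using encoding/json (assuming ai.Message marshals cleanly with the standard library):

// Save the conversation.
data, err := json.Marshal(history)
if err != nil {
	return err
}
// ... write data to your database or session store ...

// Restore it later.
var restored []*ai.Message
if err := json.Unmarshal(data, &restored); err != nil {
	return err
}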

If the model you're using supports the system role, you can use the initial history to set the system message:

history = []*ai.Message{{
	Content: []*ai.Part{ai.NewTextPart("Talk like a pirate.")},
	Role:    ai.RoleSystem,
}}
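Subsequent user messages are then appended to this history exactly as before:

history = append(history, &ai.Message{
	Content: []*ai.Part{ai.NewTextPart(prompt)},
	Role:    ai.RoleUser,
})

response, err := ai.Generate(ctx, gemini15pro, ai.WithMessages(history...))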