
FlowiseAI

API

Prediction API

POST /api/v1/prediction/{your-chatflowid}

Request Body

| Key | Description |
| --- | --- |
| question | User's question |
| overrideConfig | Override existing flow configuration |
| history | Provide a list of history messages to the flow. Only works when using Short Term Memory |
You can use the chatflow as API and connect to frontend applications.


You also have the flexibility to override input configuration with overrideConfig property.

Python

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "returnSourceDocuments": True
    },
    "history": [
        {
            "message": "Hello, how can I assist you?",
            "type": "apiMessage"
        },
        {
            "type": "userMessage",
            "message": "Hello I am Bob"
        },
        {
            "type": "apiMessage",
            "message": "Hello Bob! How can I assist you?"
        }
    ]
})

Image Uploads
When Allow Image Upload is enabled, images can be uploaded from the chat interface.

Python

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "uploads": [
        {
            "data": "data:image/png;base64,iVBORw0KGgdM2uN0",  # base64 string
            "type": "file",
            "name": "Flowise.png",
            "mime": "image/png"
        }
    ]
})
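The data field is a base64 data URI built from the raw file bytes. Here is a minimal sketch of constructing one with the Python standard library; the placeholder bytes, filename, and MIME type are illustrative only:

```python
import base64

def to_data_uri(raw_bytes, mime):
    """Encode raw file bytes as the base64 data URI expected by uploads."""
    encoded = base64.b64encode(raw_bytes).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

# Placeholder bytes for illustration; read your real image instead,
# e.g. open("Flowise.png", "rb").read()
png_bytes = b"\x89PNG\r\n\x1a\n"
upload = {
    "data": to_data_uri(png_bytes, "image/png"),
    "type": "file",
    "name": "Flowise.png",
    "mime": "image/png",
}
```

The resulting dict can be placed directly in the uploads list of the request body above.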

Speech to Text
When Speech to Text is enabled, users can speak directly into the microphone and the speech will be transcribed into text.

Python

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "uploads": [
        {
            "data": "data:audio/webm;codecs=opus;base64,GkXf",  # base64 string
            "type": "audio",
            "name": "audio.wav",
            "mime": "audio/webm"
        }
    ]
})

Vector Upsert API

POST /api/v1/vector/upsert/{your-chatflowid}

Request Body

| Key | Description |
| --- | --- |
| overrideConfig | Override existing flow configuration |
| stopNodeId | Node ID of the vector store. When you have multiple vector stores in a flow, you might not want to upsert all of them. Specifying stopNodeId will ensure only that specific vector store node is upserted. |
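As a sketch, a request body that targets a single vector store node might look like this in Python. The node ID "pinecone_0" is hypothetical; copy the real node ID from your canvas:

```python
# Sketch: upsert only one vector store node via stopNodeId.
# "pinecone_0" is a hypothetical node ID; use the one from your own flow.
payload = {
    "overrideConfig": {},        # optional flow configuration overrides
    "stopNodeId": "pinecone_0",  # only this vector store node is upserted
}
```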

Document Loaders with Upload

Some document loaders in Flowise allow users to upload files.

If the flow contains Document Loaders with the Upload File functionality, the API looks slightly different: instead of passing the body as JSON, form-data is used. This allows you to upload files to the API.

It is the user's responsibility to make sure the file type is compatible with the file type expected by the document loader. For example, if a Text File Loader is used, you should only upload files with a .txt extension.

Python

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowid>"

# use form data to upload files
form_data = {
    "files": ("state_of_the_union.txt", open("state_of_the_union.txt", "rb"))
}

body_data = {
    "returnSourceDocuments": True
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)

Document Loaders without Upload

For other Document Loader nodes without the Upload File functionality, the API body is in JSON format, similar to the Prediction API.

Python

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    print(response)
    return response.json()

output = query({
    "overrideConfig": {  # optional
        "returnSourceDocuments": True
    }
})
print(output)

Message API

GET /api/v1/chatmessage/{your-chatflowid}

DELETE /api/v1/chatmessage/{your-chatflowid}

Query Parameters

| Param | Type | Value |
| --- | --- | --- |
| sessionId | string | |
| sort | enum | ASC or DESC |
| startDate | string | |
| endDate | string | |
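Since the parameters are all optional, a small helper can build the GET URL with only the parameters you actually pass. This is a sketch using the Python standard library; the date values are illustrative:

```python
from urllib.parse import urlencode

def chatmessage_url(base_url, chatflow_id, **params):
    """Build the GET /api/v1/chatmessage URL, appending only the
    query parameters that were actually provided."""
    url = f"{base_url}/api/v1/chatmessage/{chatflow_id}"
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{url}?{query}" if query else url

url = chatmessage_url(
    "http://localhost:3000",
    "<chatflowid>",  # replace with your chatflow ID
    sort="DESC",
    startDate="2024-01-01",
)
```

The same URL shape applies to the DELETE endpoint for clearing matching messages.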

Tutorials

How to use Flowise API

How to use Flowise API and connect to Bubble



Streaming

Flowise supports streaming back to your front-end application when the final node is a Chain or OpenAI Function Agent.

1. Install socket.io-client to your front-end application

Yarn

yarn add socket.io-client

Refer to the official docs for more installation options.

2. Import it

import socketIOClient from 'socket.io-client'

3. Establish connection

const socket = socketIOClient("http://localhost:3000") // flowise url

4. Listen to connection

import { useState } from 'react'

const [socketIOClientId, setSocketIOClientId] = useState('');

socket.on('connect', () => {
setSocketIOClientId(socket.id)
});

5. Send query with socketIOClientId

async function query(data) {
    const response = await fetch(
        "http://localhost:3000/api/v1/prediction/<chatflow-id>",
        {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(data)
        }
    );
    const result = await response.json();
    return result;
}

query({
"question": "Hey, how are you?",
"socketIOClientId": socketIOClientId
}).then((response) => {
console.log(response);
});

6. Listen to token stream

socket.on('start', () => {
    console.log('start');
});

socket.on('token', (token) => {
    console.log('token:', token);
});

socket.on('sourceDocuments', (sourceDocuments) => {
    console.log('sourceDocuments:', sourceDocuments);
});

socket.on('end', () => {
    console.log('end');
});

7. Disconnect connection

socket.disconnect();



Embed

You can embed a chat widget on your website. Simply copy and paste the embed code provided anywhere in the <body> tag of your HTML file.

Watch how to do that:


You can also customize your own embedded chat widget UI and pass a chatflowConfig JSON object to override the existing config. See the configuration list.

To modify the full source code of embedded chat widget, follow these steps:

1. Fork the Flowise Chat Embed repository

2. Then you can make any code changes. A popular request is to remove the Flowise branding.

3. Run pnpm build

4. Push changes to the forked repo

5. You can then use it as embedded chat like so:

Replace username with your GitHub username, and forked-repo with your forked repo.

<script type="module">
    import Chatbot from "https://cdn.jsdelivr.net/gh/username/forked-repo/dist/web.js"
    Chatbot.init({
        chatflowid: "chatflow-id",
        apiHost: "http://localhost:3000",
    })
</script>

<script type="module">
    import Chatbot from "https://cdn.jsdelivr.net/gh/HenryHengZJ/FlowiseChatEmbed-Test/dist/web.js"
    Chatbot.init({
        chatflowid: "chatflow-id",
        apiHost: "http://localhost:3000",
    })
</script>

Tutorials

Watch how to embed Flowise in a Bootstrap 5 website

Watch how to add chatbot to website



Variables

Flowise allows users to create variables that can be used in the Custom Tool Function.

For example, you may have a database URL that you do not want exposed in the function, but you still want the function to be able to read the URL from your environment variables.

Users can create a variable and retrieve it in the Custom Tool Function:

$vars.<variable-name>

Variables can be Static or Runtime.
Telemetry

Static

A Static variable will be saved with the value specified and retrieved as-is.

Runtime

The value of the variable will be fetched from the .env file using process.env.

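Conceptually, the two variable types resolve like this. This is an illustrative Python sketch only; Flowise performs the real runtime lookup in Node.js via process.env, and the variable shape below is assumed for the example:

```python
import os

def resolve_variable(variable):
    """Conceptual sketch: a Static variable returns its saved value;
    a Runtime variable is looked up in the environment at call time."""
    if variable["type"] == "static":
        return variable["value"]
    return os.environ.get(variable["name"])

# Simulate a .env entry, then define one variable of each kind
os.environ["DATABASE_URL"] = "postgres://example"
static_var = {"name": "greeting", "type": "static", "value": "hello"}
runtime_var = {"name": "DATABASE_URL", "type": "runtime"}
```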

Resources

Pass Variables to Function



Configuration

Auth

App Level

Chatflow Level
Environment Variables

Flowise supports different environment variables to configure your instance. You can specify the following variables in the .env file inside the packages/server folder. Refer to the .env.example file.

| Variable | Description | Type | Default |
| --- | --- | --- | --- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| DEBUG | Print logs onto terminal/console | Boolean | |
| BLOB_STORAGE_PATH | Location where uploaded files are stored | String | your-home-dir/.flowise/storage |
| APIKEY_PATH | Location where API keys are saved | String | Flowise/packages/server |
| SECRETKEY_PATH | Location where the encryption key (used to encrypt/decrypt credentials) is saved | String | Flowise/packages/server |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| LOG_PATH | Location where log files are stored | String | Flowise/logs |
| LOG_LEVEL | Different log levels for loggers to be saved | Enum String: info, verbose, debug | info |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| NUMBER_OF_PROXIES | Rate Limit Proxy | Number | |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |

Database Env Variables

| Variable | Description | Type | Default |
| --- | --- | --- | --- |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: sqlite, mysql, postgres | sqlite |
| DATABASE_PATH | Location where the database is saved (when DATABASE_TYPE is sqlite) | String | your-home-dir/.flowise |
| DATABASE_HOST | Host URL or IP address (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (when DATABASE_TYPE is not sqlite) | String | |

LangSmith Tracing

Flowise supports LangSmith tracing with the following env variables:

| Variable | Description | Type |
| --- | --- | --- |
| LANGCHAIN_TRACING_V2 | Turn LangSmith tracing ON or OFF | Enum String: true, false |
| LANGCHAIN_ENDPOINT | LangSmith endpoint | String |
| LANGCHAIN_API_KEY | LangSmith API Key | String |
| LANGCHAIN_PROJECT | Project to trace on LangSmith | String |

Watch how to connect Flowise and LangSmith

Built-In and External Dependencies

For security reasons, by default the Tool Function only allows certain dependencies. It is possible to lift that restriction for built-in and external modules by setting the following environment variables:

TOOL_FUNCTION_BUILTIN_DEP: For built-in modules

TOOL_FUNCTION_EXTERNAL_DEP: For external modules sourced from the Flowise/node_modules directory

# Allows usage of all built-in modules
export TOOL_FUNCTION_BUILTIN_DEP=*

# Allows usage of only fs
export TOOL_FUNCTION_BUILTIN_DEP=fs

# Allows usage of only crypto and fs
export TOOL_FUNCTION_BUILTIN_DEP=crypto,fs

# Allows usage of external npm modules
export TOOL_FUNCTION_EXTERNAL_DEP=axios,moment
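The semantics of these comma-separated lists can be sketched as a simple allowlist check. This is illustrative only; the real enforcement lives in the Flowise server code:

```python
def is_module_allowed(module_name, allowlist):
    """Interpret a comma-separated dependency allowlist such as
    TOOL_FUNCTION_BUILTIN_DEP=crypto,fs. '*' allows every module;
    otherwise only the listed module names pass."""
    if allowlist == "*":
        return True
    allowed = {name.strip() for name in allowlist.split(",") if name.strip()}
    return module_name in allowed
```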

Debug and Logs

DEBUG: if set to true, logs will be printed to the terminal/console.

LOG_LEVEL: Different log levels for loggers to be saved. Can be error, info, verbose, or debug. By default it is set to info, so only logger.info output is saved to the log files. If you want complete details, set it to debug.

server-requests.log.jsonl: logs every request sent to Flowise

server.log: logs general actions on Flowise

server-error.log: logs errors with stack trace

Credential
Flowise stores your third-party API keys as encrypted credentials using an encryption key.

By default, a random encryption key is generated when the application starts up and is stored under a file path. This encryption key is then retrieved every time to decrypt the credentials used within a chatflow, for example your OpenAI API key, Pinecone API key, etc.

SECRETKEY_PATH : Where the encryption key is being stored

FLOWISE_SECRETKEY_OVERWRITE : Overwrite the encryption key stored in SECRETKEY_PATH

Sometimes the encryption key gets re-generated or the stored path is changed, which causes errors like "Credentials could not be decrypted". To avoid this, you can set your own encryption key as FLOWISE_SECRETKEY_OVERWRITE, so that the same encryption key is used every time. There is no restriction on the format; you can set it to any text you want, or to the same value as your FLOWISE_PASSWORD.

The credential API key returned from the UI is not the same length as the original API key you set. It is a fake prefix string that prevents network spoofing, which is why we do not return the API key to the UI. However, the correct API key will be retrieved and used during your interaction with the chatflow.

NPM
You can set all these variables when running Flowise using npx. For example:

npx flowise start --PORT=3000 --DEBUG=true

Docker
You can set all these variables in the .env file inside docker folder. Refer to .env.example file.

Render

Railway



Databases

Flowise supports 3 database types:

SQLite

MySQL

PostgreSQL

SQLite will be the default database. These databases can be configured with the following env variables:

SQLite

DATABASE_TYPE=sqlite
DATABASE_PATH=/root/.flowise # your preferred location

A database.sqlite file will be created and saved in the path specified by DATABASE_PATH. If not specified, the default store path will be .flowise in your home directory.

MySQL

DATABASE_TYPE=mysql
DATABASE_PORT=3306
DATABASE_HOST=localhost
DATABASE_NAME=flowise
DATABASE_USER=user
DATABASE_PASSWORD=123

PostgreSQL

DATABASE_TYPE=postgres
DATABASE_PORT=5432
DATABASE_HOST=localhost
DATABASE_NAME=flowise
DATABASE_USER=user
DATABASE_PASSWORD=123

If none of the env variables is specified, SQLite will be the fallback database choice.
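The fallback behaviour described above can be sketched as follows. This is illustrative; Flowise performs the equivalent logic in its TypeORM configuration:

```python
import os

def database_settings():
    """Sketch of the fallback: SQLite with a home-directory path
    unless the DATABASE_* env variables say otherwise."""
    db_type = os.environ.get("DATABASE_TYPE", "sqlite")
    if db_type == "sqlite":
        default_path = os.path.join(os.path.expanduser("~"), ".flowise")
        return {"type": "sqlite", "path": os.environ.get("DATABASE_PATH", default_path)}
    return {
        "type": db_type,
        "host": os.environ.get("DATABASE_HOST"),
        "port": os.environ.get("DATABASE_PORT"),
        "user": os.environ.get("DATABASE_USER"),
        "password": os.environ.get("DATABASE_PASSWORD"),
        "name": os.environ.get("DATABASE_NAME"),
    }
```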

Synchronize in Production
Flowise uses TypeORM to configure the database connection. By default, synchronize is set to true, which indicates whether the database schema should be auto-created on every application launch.

However, be careful with this option and do not use it in production, otherwise you can lose production data. It is useful during debugging and development.

To override the value, set the following env variable:

OVERRIDE_DATABASE=false

Tutorial: How to use Flowise databases SQLite and MySQL/MariaDB



Rate Limit

When you share your chatflow publicly with no API authorization, whether through the API or the embedded chat, anybody can access the flow. To prevent spamming, you can set a rate limit on your chatflow.

Message Limit per Duration: How many messages can be received in a specific duration. Ex: 20

Duration in Seconds: The specified duration. Ex: 60

Limit Message: What message to return when the limit is exceeded. Ex: Quota Exceeded

Using the example above, only 20 messages are allowed to be received in 60 seconds. Rate limiting is tracked by IP address. If you have deployed Flowise on a cloud service, you'll have to set the NUMBER_OF_PROXIES env variable.
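The limiting behaviour, at most a fixed number of messages per IP address within a rolling window, can be sketched as follows. This is a conceptual illustration, not Flowise's actual implementation:

```python
from collections import defaultdict, deque

class RateLimiter:
    """Sketch of per-IP rate limiting as described above:
    at most `limit` messages within `duration` seconds."""
    def __init__(self, limit, duration):
        self.limit = limit
        self.duration = duration
        self.hits = defaultdict(deque)  # ip -> timestamps of recent messages

    def allow(self, ip, now):
        window = self.hits[ip]
        # Drop timestamps that have left the rolling window
        while window and now - window[0] >= self.duration:
            window.popleft()
        if len(window) >= self.limit:
            return False  # the Limit Message would be returned here
        window.append(now)
        return True
```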


Cloud-Hosted Rate Limit Setup Guide

1. Cloud Host Flowise: Start by hosting Flowise in the cloud.

2. Set Environment Variable: Create an environment variable named NUMBER_OF_PROXIES and set its value to 0 in your hosting environment.

3. Restart Cloud-Hosted Flowise Service: This enables Flowise to apply the environment variable changes.

4. Check IP Address: To verify the IP address, access the following URL: {{hosted_url}}/api/v1/ip. You can do this either by entering the URL into your web browser or by making an API request.

5. Compare IP Address: After making the request, compare the IP address returned to your current IP address. You can find your current IP address by visiting either of these websites:

http://ip.nfriedly.com/

https://api.ipify.org/

6. Incorrect IP Address: If the returned IP address does not match your current IP address, increase NUMBER_OF_PROXIES by 1 and restart the cloud-hosted Flowise. Repeat this process until the IP address matches your own.



LlamaIndex

LlamaIndex is a data framework for LLM applications to ingest, structure, and access private or domain-specific data. It has advanced retrieval techniques for designing RAG (Retrieval Augmented Generation) apps.
Utilities

Here are the articles in this section:

Set/Get Variable

If Else
Zapier Zaps

Prerequisite

1. Log in or sign up to Zapier

2. Refer to deployment to create a cloud-hosted version of Flowise.

Setup

1. Go to Zapier Zaps

2. Click Create

Receive Trigger Message

1. Click or search for Discord

2. Select New Message Posted to Channel as Event, then click Continue

3. Sign in to your Discord account

4. Add Zapier Bot to your preferred server

5. Give appropriate permissions and click Authorize, then click Continue

6. Select your preferred channel to interact with Zapier Bot, then click Continue

7. Send a message to the channel you selected

8. Click Test trigger

9. Select your message, then click Continue with the selected record

Filter out Zapier Bot's Message

1. Click or search for Filter

2. Configure Filter to not continue if the received message is from Zapier Bot, then click Continue

Generate Result Message with FlowiseAI

1. Click +, click or search for FlowiseAI

2. Select Make Prediction as Event, then click Continue

3. Click Sign in and insert your details, then click Yes, Continue to FlowiseAI

4. Select Content from Discord and your Flow ID, then click Continue

5. Click Test action and wait for your result

Send Result Message

1. Click +, click or search for Discord

2. Select Send Channel Message as Event, then click Continue

3. Select the Discord account that you signed in with, then click Continue

4. Select your preferred Channel for channel, and select Text and String Source (if available) from FlowiseAI for Message Text, then click Continue

5. Click Test action

6. Voila! You should see the message arrive in your Discord channel

7. Lastly, rename your Zap and publish it



Web Scrape QnA

Let's say you have a website (a store, an ecommerce site, a blog), and you want to scrape all the relative links of that website and have an LLM answer any question about it. In this tutorial, we are going to go through how to achieve that.

You can find the example flow called WebPage QnA in the marketplace templates.

Upsert

We are going to use the Cheerio Web Scraper node to scrape links from a given URL, and the HtmlToMarkdown Text Splitter to split the scraped content into smaller pieces.

If you do not specify anything, by default only the given URL page will be scraped. If you want to crawl the rest of the relative links, click Additional Parameters of Cheerio Web Scraper.

Crawl Multiple Pages

1. Select Web Crawl or Scrape XML Sitemap in Get Relative Links Method.

2. Input 0 in Get Relative Links Limit to retrieve all links available from the provided URL.

Manage Links (Optional)

1. Input desired URL to be crawled.

2. Click Fetch Links to retrieve links based on the inputs of the Get Relative Links Method and
Get Relative Links Limit in Additional Parameters.

3. In the Crawled Links section, remove unwanted links by clicking the Red Trash Bin icon.

4. Lastly, click Save.

On the top right corner, you will notice a green button:

A dialog will be shown that allows users to upsert data to Pinecone:

Under the hood, the following actions will be executed:

1. Scrape all HTML data using Cheerio Web Scraper

2. Convert all scraped data from HTML to Markdown, then split it

3. Loop over the split data and convert it to vector embeddings using OpenAI Embeddings

4. Upsert the vector embeddings to Pinecone

Navigate to the Pinecone dashboard and you will be able to see new vectors being added.
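Step 2 above relies on splitting the converted Markdown into chunks before embedding. A simplified sketch of fixed-size splitting with overlap (Flowise's splitter nodes are more sophisticated, e.g. respecting Markdown structure):

```python
def split_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size chunks that overlap, so context
    is not lost at chunk boundaries before embedding."""
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each returned chunk would then be embedded and upserted as one vector.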

Query
Querying is relatively straightforward. After you have verified that the data has been upserted to the vector database, you can start asking questions in the chat:

In the Additional Parameters of Conversational Retrieval QA Chain, you can specify 2 prompts:

Rephrase Prompt: Used to rephrase the question given the past conversation history

Response Prompt: Using the rephrased question, retrieve the context from the vector database, and return a final response

It is recommended to specify a detailed response prompt. For example, you can specify the name of the AI, the language to answer in, and the response to give when an answer is not found (to prevent hallucination).

You can also turn on the Return Source Documents option to return a list of document chunks
where the AI's response is coming from.

Additional Web Scraping


Apart from Cheerio Web Scraper, there are other nodes that can perform web scraping as well:

Puppeteer: Puppeteer is a Node.js library that provides a high-level API for controlling headless
Chrome or Chromium. You can use Puppeteer to automate web page interactions, including
extracting data from dynamic web pages that require JavaScript to render.

Playwright: Playwright is a Node.js library that provides a high-level API for controlling multiple
browser engines, including Chromium, Firefox, and WebKit. You can use Playwright to automate
web page interactions, including extracting data from dynamic web pages that require
JavaScript to render.

Apify: Apify is a cloud platform for web scraping and data extraction, which provides an
ecosystem of more than a thousand ready-made apps called Actors for various web scraping,
crawling, and data extraction use cases.

The same logic can be applied to any document use cases, not just limited to web
scraping!

If you have any suggestion on how to improve the performance, we'd love your contribution!



Multiple Documents QnA

In the last Web Scrape QnA example, we were only upserting and querying one website. What if we have multiple websites, or multiple documents? Let's take a look and see how we can achieve that.

In this example, we are going to perform QnA on 2 PDFs, which are the FORM 10-K filings of APPLE and TESLA.

Upsert

1. Find the example flow called Conversational Retrieval QA Chain in the marketplace templates.

2. We are going to use the PDF File Loader and upload the respective files:

3. Click the Additional Parameters of PDF File Loader and specify the metadata object. For instance, the PDF file with the Apple FORM 10-K uploaded can have a metadata object {source: apple}, whereas the PDF file with the Tesla FORM 10-K uploaded can have {source: tesla}. This is done to segregate the documents during retrieval time.

4. After filling in the credentials for Pinecone, click Upsert:

5. Navigate to the Pinecone dashboard and you will be able to see new vectors being added.

Query

1. After verifying that the data has been upserted to Pinecone, we can now start asking questions in the chat!

2. However, the context retrieved and used to return the answer is a mix of both APPLE and TESLA documents, as you can see from the Source Documents:

3. We can fix this by specifying a metadata filter on the Pinecone node. For example, if we only want to retrieve context from the APPLE FORM 10-K, we can look back at the metadata we specified earlier in the Upsert step, then use the same value in the Metadata Filter:

4. Let's ask the same question again; we should now see that all retrieved context is indeed from the APPLE FORM 10-K:

Each vector database provider has a different filtering syntax; we recommend reading through the respective vector database documentation.

5. However, the problem with this is that the metadata filtering is "hard-coded". Ideally, we should let the LLM decide which document to retrieve based on the question.

Agent
We can solve the "hard-coded" metadata filter problem by using a Function Calling Agent.

By providing tools to the agent, we let the agent decide which tool is suitable to use depending on the question.

1. Create a Retriever Tool with following name and description:

Name: search_apple

Description: Use this function to answer user questions about Apple Inc (APPL). It contains a
SEC Form 10K filing describing the financials of Apple Inc (APPL) for the 2022 time period.

2. Connect to Pinecone node with metadata filter {source: apple}

3. Repeat the same for tesla.

Name: search_tsla

Description: Use this function to answer user questions about Tesla Inc (TSLA). It contains a
SEC Form 10K filing describing the financials of Tesla Inc (TSLA) for the 2022 time period.

Pinecone Metadata Filter: {source: tesla}

It is important to specify a clear and concise description. This allows the LLM to better decide when to use which tool.

4. Now, we need to create a general instruction to OpenAI Function Agent. Click Additional
Parameters of the node, and specify the System Message. For example:

You are an expert financial analyst that always answers questions with the most relevant information using the tools at your disposal.
These tools have information regarding companies that the user has expressed interest in.
Here are some guidelines that you must follow:
* For financial questions, you must use the tools to find the answer and then write a response.
* Even if it seems like your tools won't be able to answer the question, you must still use them to find the most relevant information and insights. Not using
* You may assume that the users financial questions are related to the documents they've selected.
* For any user message that isn't related to financial analysis, respectfully decline to respond and suggest that the user ask a relevant question.
* If your tools are unable to find an answer, you should say that you haven't found an answer but still relay any useful information the tools found.
* Dont ask clarifying questions, just return answer.

The tools at your disposal have access to the following SEC documents that the user has selected to discuss with you:
- Apple Inc (APPL) FORM 10K 2022
- Tesla Inc (TSLA) FORM 10K 2022

The current date is: 2024-01-28

5. Save the chatflow, and start asking questions!

6. Follow up with Tesla:

7. We are now able to ask questions about any documents that we've previously upserted to the vector database, without "hard-coding" the metadata filter, by using tools + agent.

Conclusion
We've covered using the Conversational Retrieval QA Chain and its limitation when querying multiple documents. We were able to overcome the issue by using the OpenAI Function Agent + Tools. You can find the template below:

Agent Chatflow.json (39KB)



SQL QnA

Unlike previous examples like Web Scrape QnA and Multiple Documents QnA, querying structured data does not require a vector database. At a high level, this can be achieved with the following steps:

1. Providing the LLM with:

an overview of the SQL database schema

example rows of data

2. Return a SQL query with few-shot prompting

3. Validate the SQL query using the If Else node

4. Custom function to execute the SQL query and get the response

5. Return a natural response from the executed SQL response

In this example, we are going to create a QnA chatbot that can interact with a SQL database stored in SingleStore.

TL;DR
You can find the chatflow template:

SQL Chatflow.json (54KB)

1) SQL Database Schema + Example Rows


Use a Custom JS Function node to connect to SingleStore, and retrieve the database schema and the
top 3 rows.

From the research paper, it is recommended to generate a prompt with the following example format:

CREATE TABLE samples (firstName varchar NOT NULL, lastName varchar)


SELECT * FROM samples LIMIT 3
firstName lastName
Stephen Tyler
Jack McGinnis
Steven Repici

Full Javascript Code
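The full code is collapsed in the original page and not reproduced here. As a rough sketch of the prompt-building portion only (the function name and hard-coded rows below are illustrative assumptions, not the docs' actual code; with a live connection, the CREATE statement and rows would come from SHOW CREATE TABLE and SELECT ... LIMIT 3 queries against SingleStore):

```javascript
// Sketch: build the schema + example-rows prompt in the format above.
// In the real Custom JS Function node, createStatement and rows would be
// fetched from SingleStore; here they are hard-coded for illustration.
function buildSchemaPrompt(createStatement, rows) {
    const columns = Object.keys(rows[0]);
    const header = columns.join('\t');
    const body = rows
        .map((row) => columns.map((col) => String(row[col])).join('\t'))
        .join('\n');
    return [
        createStatement,
        '',
        `SELECT * FROM samples LIMIT ${rows.length}`,
        header,
        body
    ].join('\n');
}

const schemaPrompt = buildSchemaPrompt(
    'CREATE TABLE samples (firstName varchar NOT NULL, lastName varchar)',
    [
        { firstName: 'Stephen', lastName: 'Tyler' },
        { firstName: 'Jack', lastName: 'McGinnis' },
        { firstName: 'Steven', lastName: 'Repici' }
    ]
);
console.log(schemaPrompt);
```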

You can find more on how to get the HOST , USER , PASSWORD from this guide. Once finished,
click Execute:

We can now see the correct format has been generated. The next step is to bring this into the
Prompt Template.

2) Return a SQL query with few shot prompting


Create a new Chat Model + Prompt Template + LLMChain

Specify the following prompt in the Prompt Template:

Based on the provided SQL table schema and question below, return a SQL SELECT ALL query that would answer the user's question. For example: SELECT * FROM table WHERE id = '1'.
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY:

Since we are using 2 variables: {schema} and {question}, specify their values in Format Prompt
Values:
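As an aside, the substitution that Format Prompt Values performs can be pictured as a plain string replacement (formatPrompt below is an illustrative helper, not a Flowise API):

```javascript
// Illustration only: fill {schema} and {question} placeholders into the
// prompt template, the way Format Prompt Values does inside the LLMChain.
function formatPrompt(template, values) {
    return template.replace(/\{(\w+)\}/g, (match, key) =>
        key in values ? values[key] : match
    );
}

const filled = formatPrompt('SCHEMA: {schema}\nQUESTION: {question}\nSQL QUERY:', {
    schema: 'CREATE TABLE samples (firstName varchar NOT NULL)',
    question: 'what is the address of John'
});
console.log(filled);
```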

You can provide more examples in the prompt (i.e. few-shot prompting) to help the LLM
learn better, or take reference from dialect-specific prompting

3) Validate the SQL query using If Else node


Sometimes the SQL query is invalid, and we do not want to waste resources executing an invalid
SQL query, for example when the user asks a general question that is irrelevant to the SQL database.
We can use the If Else node to route to a different path.

For instance, we can perform a basic check to see if SELECT and WHERE are included in the SQL
query given by the LLM.

If Function Else Function

const sqlQuery = $sqlQuery.trim();

if (sqlQuery.includes("SELECT") && sqlQuery.includes("WHERE")) {
    return sqlQuery;
}
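Outside of Flowise, the same check can be exercised as a standalone function ($sqlQuery is just a plain string there; isValidSelect is an illustrative name, not a Flowise API):

```javascript
// Same check as the If Function above: only pass queries that contain
// both SELECT and WHERE, so broad or irrelevant queries take the Else route.
function isValidSelect(sqlQuery) {
    const q = sqlQuery.trim();
    return q.includes('SELECT') && q.includes('WHERE');
}
```

Note that a fallback query like SELECT * FROM samples LIMIT 3 fails this check because it has no WHERE clause.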

In the Else Function, we will route to a Prompt Template + LLMChain that basically tells the LLM that
it is unable to answer the user's query:

4) Custom function to execute the SQL query, and get the response


If it is a valid SQL query, we need to execute it. Connect the True output from the If Else node
to a Custom JS Function node:

Full Javascript Code
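Again, the full code is collapsed in the original page. As a hedged sketch (assuming SingleStore's MySQL-compatible protocol via the mysql2 client; the actual docs' code may differ), the node executes the query and returns the rows as a JSON string, which later becomes the {sqlResponse} value:

```javascript
// Serialize the result rows to a JSON string, the shape seen later in the
// logs, e.g. [{"userAddress":"120 jefferson st."}].
function formatSqlResponse(rows) {
    return JSON.stringify(rows);
}

// With a live connection, the node body would look roughly like
// (HOST/USER/PASSWORD/DATABASE come from the SingleStore guide above):
//   const mysql = require('mysql2/promise');
//   const conn = await mysql.createConnection({ host: HOST, user: USER, password: PASSWORD, database: DATABASE });
//   const [rows] = await conn.execute($sqlQuery);
//   await conn.end();
//   return formatSqlResponse(rows);

console.log(formatSqlResponse([{ userAddress: '120 jefferson st.' }]));
```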

5) Return a natural response from the executed SQL response


Create a new Chat Model + Prompt Template + LLMChain

Write the following prompt in the Prompt Template:

Based on the question, and SQL response, write a natural language response, be details as possible:
------------
QUESTION: {question}
------------
SQL RESPONSE: {sqlResponse}
------------
NATURAL LANGUAGE RESPONSE:

Specify the variables in Format Prompt Values:

Voila! Your SQL chatbot is now ready for testing!

Query
First, let's ask something related to the database.

Looking at the logs, we can see the first LLMChain is able to give us a SQL query:

Input:

Based on the provided SQL table schema and question below, return a SQL SELECT ALL
query that would answer the user's question. For example: SELECT * FROM table WHERE
id = '1'.\n------------\nSCHEMA: CREATE TABLE samples (id bigint(20) NOT NULL,
firstName varchar(300) NOT NULL, lastName varchar(300) NOT NULL, userAddress
varchar(300) NOT NULL, userState varchar(300) NOT NULL, userCode varchar(300) NOT
NULL, userPostal varchar(300) NOT NULL, createdate timestamp(6) NOT NULL)\nSELECT *
FROM samples LIMIT 3\nid firstName lastName userAddress userState userCode userPostal
createdate\n1125899906842627 Steven Repici 14 Kingston St. Oregon NJ 5578 Thu Dec 14
2023 13:06:17 GMT+0800 (Singapore Standard Time)\n1125899906842625 John Doe 120
jefferson st. Riverside NJ 8075 Thu Dec 14 2023 13:04:32 GMT+0800 (Singapore Standard
Time)\n1125899906842629 Bert Jet 9th, at Terrace plc Desert City CO 8576 Thu Dec 14
2023 13:07:11 GMT+0800 (Singapore Standard Time)\n------------\nQUESTION: what is the
address of John\n------------\nSQL QUERY:

Output

SELECT userAddress FROM samples WHERE firstName = 'John'

After executing the SQL query, the result is passed to the 2nd LLMChain:

Input

Based on the question, and SQL response, write a natural language response, be
details as possible:\n------------\nQUESTION: what is the address of John\n----------
--\nSQL RESPONSE: [{\"userAddress\":\"120 jefferson st.\"}]\n------------\nNATURAL
LANGUAGE RESPONSE:

Output

The address of John is 120 Jefferson St.

Now, if we ask something that is irrelevant to the SQL database, the Else route is taken.

For the first LLMChain, a SQL query is generated as below:

SELECT * FROM samples LIMIT 3

However, it fails the If Else check because it doesn't contain both SELECT and WHERE , hence it
enters the Else route, which has a prompt that says:

Politely say "I'm not able to answer query"

And the final output is:

I apologize, but I'm not able to answer your query at the moment.

Conclusion
In this example, we have successfully created a SQL chatbot that can interact with your
database, and that is also able to handle questions that are irrelevant to the database. Further
improvements include adding memory to provide conversation history.

You can find the chatflow below:

SQL Chatflow.json 54KB


Code


Webhook Tool

In this use case tutorial, we are going to create a custom tool that is able to call a webhook
endpoint and pass the necessary parameters into the webhook body. We'll be using Make.com
to create the webhook workflow.

Make
Head over to Make.com. After registering an account, create a workflow that has a Webhook
module and a Discord module, which looks like below:

From the Webhook module, you should be able to see a webhook URL:


From the Discord module, we are passing the message body from the Webhook as the message
to send to the Discord channel:

To test it out, you can click Run once at the bottom left corner, and send a POST request with a
JSON body:

{
"message": "Hello Discord!"
}
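For illustration, the same test request can be constructed in code (buildWebhookRequest is a hypothetical helper, and the URL below is a placeholder for your own webhook URL):

```javascript
// Build the POST request the Make webhook expects: a JSON body with a
// single "message" field, sent with a JSON content type.
function buildWebhookRequest(message) {
    return {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message })
    };
}

const request = buildWebhookRequest('Hello Discord!');
// e.g. await fetch('https://hook.eu1.make.com/<your-webhook-id>', request);
console.log(request.body);
```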

You'll be able to see a Discord message sent to the channel:

Perfect! We have successfully configured a workflow that is able to take a message and send it to
a Discord channel

Flowise
In Flowise, we are going to create a custom tool that is able to call the webhook with a POST
request, passing in the message body.

From the dashboard, click Tools, then click Create

We can then fill in the following fields (feel free to change this according to your needs):

Tool Name: make_webhook (must be in snake_case)

Tool Description: Useful when you need to send message to Discord

Tool Icon Src: https://github.com/FlowiseAI/Flowise/assets/26460777/517fdab2-8a6e-4781-b3c8-fb92cc78aa0b

Output Schema:

JavaScript Function:

const fetch = require('node-fetch');

const webhookUrl = 'https://hook.eu1.make.com/abcdef';
const body = {
    "message": $message
};
const options = {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json'
    },
    body: JSON.stringify(body)
};
try {
    const response = await fetch(webhookUrl, options);
    const text = await response.text();
    return text;
} catch (error) {
    console.error(error);
    return '';
}

Click Add to save the custom tool, and you should be able to see it now:

Now, create a new canvas with following nodes:

Buffer Memory

ChatOpenAI

Custom Tool (select the make_webhook tool we just created)

OpenAI Function Agent

It should look like below after connecting them up:

Save the chatflow, and start testing it!

For example, we can ask a question like "how to cook an egg"

Then ask the agent to send all of these to Discord:

Go to the Discord channel, and you will be able to see the message:

That's it! The OpenAI Function Agent will be able to automatically figure out what to pass as the
message and send it over to Discord. This is just a quick example of how to trigger a webhook
workflow with a dynamic body. The same idea can be applied to workflows that have a webhook and
Gmail, Google Sheets, etc.

You can read more on how to pass chat information like sessionId , flowid and variables to
custom tool - Additional

Tutorials

Watch a step-by-step instruction video on using Webhooks with Flowise custom tools.

Watch how to connect Flowise to Google Sheets using webhooks

Watch how to connect Flowise to Microsoft Excel using webhooks
