Prediction API
POST /api/v1/prediction/{your-chatflowid}
You can use the chatflow as an API and connect it to frontend applications.
Request Body
history: Provide a list of history messages to the flow. Only works when using short-term memory.
You also have the flexibility to override the input configuration with the overrideConfig property.
Python

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "overrideConfig": {
        "returnSourceDocuments": True
    },
    "history": [
        {
            "message": "Hello, how can I assist you?",
            "type": "apiMessage"
        },
        {
            "type": "userMessage",
            "message": "Hello I am Bob"
        },
        {
            "type": "apiMessage",
            "message": "Hello Bob! how can I assist you?"
        }
    ]
})
Image Uploads
When Allow Image Upload is enabled, images can be uploaded from the chat interface.
Python

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "uploads": [
        {
            "data": "data:image/png;base64,iVBORw0KGgdM2uN0",  # base64 string
            "type": "file",
            "name": "Flowise.png",
            "mime": "image/png"
        }
    ]
})
Speech to Text
When Speech to Text is enabled, users can speak directly into the microphone, and the speech will be
transcribed into text.
Python

import requests

API_URL = "http://localhost:3000/api/v1/prediction/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
    "uploads": [
        {
            "data": "data:audio/webm;codecs=opus;base64,GkXf",  # base64 string
            "type": "audio",
            "name": "audio.wav",
            "mime": "audio/webm"
        }
    ]
})
Vector Upsert API
POST /api/v1/vector/upsert/{your-chatflowid}
Request Body
stopNodeId: Node ID of the vector store. When you have multiple vector stores in a flow, you might not want to upsert all of them. Specifying stopNodeId ensures that only that specific vector store node is upserted.
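As a sketch, an upsert request that targets a single vector store node might look like the following; the node id "pineconeUpsert_0" is a hypothetical example, so copy the actual node id from your own canvas.

```python
import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowid>"

# "pineconeUpsert_0" is a hypothetical node id -- use the id of the
# vector store node shown on your own canvas.
payload = {"stopNodeId": "pineconeUpsert_0"}

def upsert(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

# output = upsert(payload)
```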
If the flow contains Document Loaders with Upload File functionality, the API looks slightly
different. Instead of passing the body as JSON, form-data is used. This allows you to upload any
files to the API.
It is the user's responsibility to make sure the file type is compatible with the expected file
type of the document loader. For example, if a Text File Loader is being used, you should
only upload files with the .txt extension.
Python

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowid>"

# The filename below is an example -- upload a file compatible with your document loader
form_data = {
    "files": ("example.txt", open("example.txt", "rb"))
}

body_data = {
    "returnSourceDocuments": True
}

def query(form_data):
    response = requests.post(API_URL, files=form_data, data=body_data)
    print(response)
    return response.json()

output = query(form_data)
print(output)
Python

import requests

API_URL = "http://localhost:3000/api/v1/vector/upsert/<chatflowid>"

def query(payload):
    response = requests.post(API_URL, json=payload)
    print(response)
    return response.json()

output = query({
    "overrideConfig": {  # optional
        "returnSourceDocuments": True
    }
})
print(output)
Message API
GET /api/v1/chatmessage/{your-chatflowid}
DELETE /api/v1/chatmessage/{your-chatflowid}
Query Parameters
sessionId string
startDate string
endDate string
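A minimal sketch of calling these endpoints from Python, mirroring the prediction examples above; the sessionId and date values are hypothetical placeholders.

```python
import requests

API_URL = "http://localhost:3000/api/v1/chatmessage/<chatflowid>"

# All three query parameters are optional filters;
# the values below are hypothetical placeholders.
params = {
    "sessionId": "user-session-1",
    "startDate": "2024-01-01",
    "endDate": "2024-01-31",
}

def get_messages(params):
    response = requests.get(API_URL, params=params)
    return response.json()

def delete_messages(params):
    response = requests.delete(API_URL, params=params)
    return response.json()

# messages = get_messages(params)
```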
Streaming
Flowise supports streaming back to your front-end application when the final node is a Chain or
OpenAI Function Agent.
1. Install the socket.io-client library.
2. Import it.
3. Establish connection.
4. Listen to connection:

socket.on('connect', () => {
    setSocketIOClientId(socket.id)
});

5. Send the prediction request together with the socketIOClientId:

query({
    "question": "Hey, how are you?",
    "socketIOClientId": socketIOClientId
}).then((response) => {
    console.log(response);
});

You can also listen to the start and end events of the streamed response:

socket.on('start', () => {
    console.log('start');
});

socket.on('end', () => {
    console.log('end');
});

6. Disconnect connection:

socket.disconnect();
Embed
You can embed a chat widget on your website. Simply copy and paste the provided embed code
anywhere in the <body> tag of your HTML file.
You can also customize your own embedded chat widget UI and pass a chatflowConfig JSON object
to override the existing config. See the configuration list.
To modify the full source code of the embedded chat widget, follow these steps:
1. Fork the Flowise chat embed repository.
2. Then you can make any code changes. One popular ask is to remove the Flowise branding.
To use your modified widget, replace username with your GitHub username, and forked-repo with your forked repo:
<script type="module">
import Chatbot from "https://cdn.jsdelivr.net/gh/username/forked-repo/dist/web.js"
Chatbot.init({
chatflowid: "chatflow-id",
apiHost: "http://localhost:3000",
})
</script>
<script type="module">
import Chatbot from "https://cdn.jsdelivr.net/gh/HenryHengZJ/FlowiseChatEmbed-Test/dist/web.js"
Chatbot.init({
chatflowid: "chatflow-id",
apiHost: "http://localhost:3000",
})
</script>
Variables
Flowise allows users to create variables that can be used in Custom Tool Functions.
For example, you may have a database URL that you do not want exposed in the function, but
you still want the function to be able to read the URL from your environment variable.
Users can create a variable and retrieve it in a Custom Tool Function:
$vars.<variable-name>
Variables can be Static or Runtime.
Static
A Static variable will be saved with the value specified, and retrieved as-is.
Runtime
The value of the variable will be fetched from the .env file using process.env.
Analytic
Analytics can be configured at the app level or at the chatflow level.
Environment Variables
Flowise supports different environment variables to configure your instance. You can specify the
following variables in the .env file inside the packages/server folder. Refer to the .env.example file.
PORT: The HTTP port Flowise runs on (Number; default: 3000)
FLOWISE_USERNAME: Username to login (String)
FLOWISE_PASSWORD: Password to login (String)
DEBUG: Print logs onto terminal/console (Boolean)
BLOB_STORAGE_PATH: Location where uploaded files are stored (String; default: your-home-dir/.flowise/storage)
SECRETKEY_PATH: Location where the encryption key (used to encrypt/decrypt credentials) is saved (String; default: Flowise/packages/server)
FLOWISE_SECRETKEY_OVERWRITE: Encryption key to be used instead of the key stored in SECRETKEY_PATH (String)
LOG_LEVEL: Different log levels for loggers to be saved (Enum String: info, verbose, debug; default: info)
TOOL_FUNCTION_BUILTIN_DEP: NodeJS built-in modules to be used for Tool Function (String)
TOOL_FUNCTION_EXTERNAL_DEP: External modules to be used for Tool Function (String)
DATABASE_TYPE: Type of database to store the Flowise data (Enum String: sqlite, mysql, postgres; default: sqlite)
DATABASE_PATH: Location where the database is saved, when DATABASE_TYPE is sqlite (String; default: your-home-dir/.flowise)
DATABASE_HOST: Host URL or IP address, when DATABASE_TYPE is not sqlite (String)
DATABASE_USER: Database username, when DATABASE_TYPE is not sqlite (String)
DATABASE_PASSWORD: Database password, when DATABASE_TYPE is not sqlite (String)
DATABASE_NAME: Database name, when DATABASE_TYPE is not sqlite (String)
LangSmith Tracing
Flowise supports LangSmith tracing with the following env variables:
LANGCHAIN_PROJECT: Project to trace on LangSmith (String)
LOG_LEVEL: Different log levels for loggers to be saved. Can be error, info, verbose, or
debug. By default it is set to info, and only logger.info output will be saved to the log files. If you
want complete details, set it to debug.
Credential
Flowise stores your third-party API keys as encrypted credentials, using an encryption key.
By default, a random encryption key is generated when the application starts up and stored
under a file path. This encryption key is then retrieved every time to decrypt the credentials used
within a chatflow, for example your OpenAI API key, Pinecone API key, etc.
Sometimes the encryption key might be re-generated or the stored path changed; this causes
errors like Credentials could not be decrypted. To avoid this, you can set
your own encryption key as FLOWISE_SECRETKEY_OVERWRITE, so that the same encryption key
is used every time. There is no restriction on the format; you can set it as any text that you
want, or the same as your FLOWISE_PASSWORD.
The credential API key returned from the UI is not the same length as the original API key
that you set. It is a fake prefix string that prevents network spoofing, which is why
we do not return the API key back to the UI. However, the correct API key will be
retrieved and used during your interaction with the chatflow.
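For example, you can add the following line to your .env file; the key value here is an arbitrary placeholder, since any text works:

```
FLOWISE_SECRETKEY_OVERWRITE=mySecureEncryptionKey
```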
NPM
You can set all these variables when running Flowise using npx. For example:
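A sketch of how this might look; the flag names mirror the environment variables, so adjust them to your setup:

```shell
npx flowise start --PORT=3000 --DEBUG=true
```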
Docker
You can set all these variables in the .env file inside the docker folder. Refer to the .env.example file.
Render
Railway
Databases
Flowise supports 3 database types:
SQLite
MySQL
PostgreSQL
SQLite will be the default database. These databases can be configured with the following env
variables:
SQLite
A database.sqlite file will be created and saved in the path specified by DATABASE_PATH . If
not specified, the default store path will be in your home directory → .flowise
MySQL
DATABASE_TYPE=mysql
DATABASE_PORT=3306
DATABASE_HOST=localhost
DATABASE_NAME=flowise
DATABASE_USER=user
DATABASE_PASSWORD=123
PostgreSQL
DATABASE_TYPE=postgres
DATABASE_PORT=5432
DATABASE_HOST=localhost
DATABASE_NAME=flowise
DATABASE_USER=user
DATABASE_PASSWORD=123
If none of the env variables are specified, SQLite will be the fallback database choice.
Synchronize in Production
Flowise uses TypeORM to configure the database connection. By default, synchronize is set to true,
which indicates whether the database schema should be auto-created on every application launch.
Be careful with this option and do not use it in production, otherwise you can lose production data;
it is useful during debugging and development. To prevent this, set:
OVERRIDE_DATABASE=false
Rate Limit
When you share your chatflow publicly with no API authorization, through the API or embedded chat,
anybody can access the flow. To prevent spamming, you can set a rate limit on your chatflow.
Message Limit per Duration: How many messages can be received in a specific duration. Ex: 20
Duration in Seconds: The duration window in which messages are counted. Ex: 60
Limit Message: What message to return when the limit is exceeded. Ex: Quota Exceeded
Using the example above, only 20 messages are allowed to be received in 60 seconds.
Rate limiting is tracked by IP address. If you have deployed Flowise on a cloud service, you'll
have to set the NUMBER_OF_PROXIES env variable.
3. Restart the Cloud-Hosted Flowise Service: This enables Flowise to apply the changes to environment
variables.
5. Compare IP Address: After making the request, compare the IP address returned to your
current IP address. You can find your current IP address by visiting either of these websites:
http://ip.nfriedly.com/
https://api.ipify.org/
6. Incorrect IP Address: If the returned IP address does not match your current IP address,
increase NUMBER_OF_PROXIES by 1 and restart Cloud-Hosted Flowise. Repeat this process
until the IP address matches your own.
Integrations
LlamaIndex is a data framework for LLM applications to ingest, structure, and access private or
domain-specific data. It has advanced retrieval techniques for designing RAG (Retrieval-Augmented
Generation) apps.
Zapier Zaps
Prerequisite
1. Log in or sign up to Zapier.
2. Refer to the deployment guide to create a cloud-hosted version of Flowise.
Setup
1. Go to Zapier Zaps.
2. Click Create.
Receive Trigger Message
6. Select your preferred channel to interact with the Zapier Bot, then click Continue.
9. Select your message, then click Continue with the selected record.
Filter out Zapier Bot's Message
2. Configure the Filter to not continue if the message received is from the Zapier Bot, then click Continue.
FlowiseAI generate Result Message
3. Click Sign in and insert your details, then click Yes, Continue to FlowiseAI.
4. Select Content from Discord and your Flow ID, then click Continue.
Send Result Message
3. Select the Discord account that you signed in with, then click Continue.
4. Select your preferred Channel for channel, and select Text and String Source (if available) from
FlowiseAI for Message Text, then click Continue.
6. Voilà! You should see the message arrive in your Discord channel.
Web Scrape QnA
Upsert
We are going to use the Cheerio Web Scraper node to scrape links from a given URL, and the
HtmlToMarkdown Text Splitter to split the scraped content into smaller pieces.
If you do not specify anything, by default only the given URL page will be scraped. If you want to
crawl the rest of the relative links, click Additional Parameters of Cheerio Web Scraper.
1. Select Web Crawl or Scrape XML Sitemap in Get Relative Links Method.
2. Input 0 in Get Relative Links Limit to retrieve all links available from the provided URL.
2. Click Fetch Links to retrieve links based on the inputs of the Get Relative Links Method and
Get Relative Links Limit in Additional Parameters.
3. In Crawled Links section, remove unwanted links by clicking Red Trash Bin Icon.
3. The split data will be looped over and converted to vector embeddings using OpenAI
Embeddings.
Navigate to the Pinecone dashboard, and you will be able to see new vectors being added.
Query
Querying is relatively straightforward. After you have verified that the data is upserted to the vector
database, you can start asking questions in the chat:
In the Additional Parameters of Conversational Retrieval QA Chain, you can specify 2 prompts:
Rephrase Prompt: Used to rephrase the question given the past conversation history
Response Prompt: Using the rephrased question, retrieve the context from vector database,
and return a final response
It is recommended to specify a detailed response prompt. For example, you can
specify the name of the AI, the language to answer in, and the response when the answer is not found
(to prevent hallucination).
You can also turn on the Return Source Documents option to return a list of document chunks
where the AI's response is coming from.
Puppeteer: Puppeteer is a Node.js library that provides a high-level API for controlling headless
Chrome or Chromium. You can use Puppeteer to automate web page interactions, including
extracting data from dynamic web pages that require JavaScript to render.
Playwright: Playwright is a Node.js library that provides a high-level API for controlling multiple
browser engines, including Chromium, Firefox, and WebKit. You can use Playwright to automate
web page interactions, including extracting data from dynamic web pages that require
JavaScript to render.
Apify: Apify is a cloud platform for web scraping and data extraction, which provides an
ecosystem of more than a thousand ready-made apps called Actors for various web scraping,
crawling, and data extraction use cases.
The same logic can be applied to any document use cases, not just limited to web
scraping!
If you have any suggestions on how to improve the performance, we'd love your contribution!
Multiple Documents QnA
From the last Web Scrape QnA example, we were only upserting and querying one website. What if we
have multiple websites, or multiple documents? Let's take a look and see how we can achieve that.
In this example, we are going to perform QnA on two PDFs: the FORM-10K filings of APPLE and
TESLA.
Upsert
1. Find the example flow called Conversational Retrieval QA Chain in the marketplace
templates.
2. We are going to use PDF File Loader, and upload the respective files:
3. Click the Additional Parameters of PDF File Loader, and specify the metadata object. For
instance, the PDF file with the Apple FORM-10K can have a metadata object {source:
apple}, whereas the PDF file with the Tesla FORM-10K can have {source: tesla}. This
is done to segregate the documents during retrieval time.
5. Navigate to the Pinecone dashboard, and you will be able to see new vectors being added.
Query
1. After verifying that data has been upserted to Pinecone, we can now start asking questions in the
chat!
2. However, the context retrieved to return the answer is a mix of both APPLE and TESLA
documents, as you can see from the Source Documents:
3. We can fix this by specifying a metadata filter from the Pinecone node. For example, if we only
want to retrieve context from APPLE FORM-10K, we can look back at the metadata we have
specified earlier in the Upsert step, then use the same in the Metadata Filter below:
4. Let's ask the same question again, we should now see all context retrieved are indeed from
APPLE FORM-10K:
Each vector database provider has a different filtering syntax; it is recommended to read
through the respective vector database documentation.
5. However, the problem with this is that the metadata filter is "hard-coded". Ideally, we
should let the LLM decide which document to retrieve based on the question.
Agent
We can solve the "hard-coded" metadata filter problem by using a Function Calling Agent.
By providing tools to the agent, we can let the agent decide which tool is suitable to use
depending on the question.
Name: search_apple
Description: Use this function to answer user questions about Apple Inc (APPL). It contains a
SEC Form 10K filing describing the financials of Apple Inc (APPL) for the 2022 time period.
Name: search_tsla
Description: Use this function to answer user questions about Tesla Inc (TSLA). It contains a
SEC Form 10K filing describing the financials of Tesla Inc (TSLA) for the 2022 time period.
It is important to specify a clear and concise description. This allows the LLM to better
decide when to use which tool.
4. Now, we need to create a general instruction to OpenAI Function Agent. Click Additional
Parameters of the node, and specify the System Message. For example:
You are an expert financial analyst that always answers questions with the most relevant information using the tools at your disposal.
These tools have information regarding companies that the user has expressed interest in.
Here are some guidelines that you must follow:
* For financial questions, you must use the tools to find the answer and then write a response.
* Even if it seems like your tools won't be able to answer the question, you must still use them to find the most relevant information and insights. Not using
* You may assume that the users financial questions are related to the documents they've selected.
* For any user message that isn't related to financial analysis, respectfully decline to respond and suggest that the user ask a relevant question.
* If your tools are unable to find an answer, you should say that you haven't found an answer but still relay any useful information the tools found.
* Dont ask clarifying questions, just return answer.
The tools at your disposal have access to the following SEC documents that the user has selected to discuss with you:
- Apple Inc (APPL) FORM 10K 2022
- Tesla Inc (TSLA) FORM 10K 2022
7. We are now able to ask questions about any documents that we've previously upserted to the
vector database without "hard-coding" the metadata filter, by using tools + agent.
Conclusion
We've covered using the Conversational Retrieval QA Chain and its limitation when querying multiple
documents. And we were able to overcome the issue by using the OpenAI Function Agent + Tools. You
can find the template below:
SQL QnA
In this example, we are going to create a QnA chatbot that can interact with a SQL database stored
in SingleStore.
TL;DR
You can find the chatflow template:
From the research paper, it is recommended to generate a prompt with the following example format:
You can find more on how to get the HOST , USER , PASSWORD from this guide. Once finished,
click Execute:
We can now see the correct format has been generated. The next step is to bring this into the Prompt
Template.
Based on the provided SQL table schema and question below, return a SQL SELECT ALL query that would answer the user's question. For example: SELECT * FROM table WHERE id = '1'.
------------
SCHEMA: {schema}
------------
QUESTION: {question}
------------
SQL QUERY:
Since we are using 2 variables, {schema} and {question}, specify their values in Format Prompt
Values:
You can provide more examples to the prompt (i.e., few-shot prompting) to help the LLM
learn better, or take reference from dialect-specific prompting.
For instance, we can perform a basic check to see if SELECT and WHERE are included in the SQL
query given by the LLM.
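In Flowise the If Else node's check is written as a JavaScript function, but the logic is simple enough to sketch in Python:

```python
def looks_like_sql_query(output: str) -> bool:
    """Return True when the LLM output contains both SELECT and WHERE."""
    upper = output.upper()
    return "SELECT" in upper and "WHERE" in upper

print(looks_like_sql_query("SELECT * FROM samples WHERE firstName = 'John'"))  # True
print(looks_like_sql_query("I am unable to answer that."))                     # False
```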
In the Else Function, we will route to a Prompt Template + LLMChain that basically tells the LLM that it
is unable to answer the user query:
Based on the question, and SQL response, write a natural language response, be as detailed as possible:
------------
QUESTION: {question}
------------
SQL RESPONSE: {sqlResponse}
------------
NATURAL LANGUAGE RESPONSE:
Query
First, let's ask something related to the database.
Looking at the logs, we can see the first LLMChain is able to give us a SQL query:
Input:
Based on the provided SQL table schema and question below, return a SQL SELECT ALL
query that would answer the user's question. For example: SELECT * FROM table WHERE
id = '1'.\n------------\nSCHEMA: CREATE TABLE samples (id bigint(20) NOT NULL,
firstName varchar(300) NOT NULL, lastName varchar(300) NOT NULL, userAddress
varchar(300) NOT NULL, userState varchar(300) NOT NULL, userCode varchar(300) NOT
NULL, userPostal varchar(300) NOT NULL, createdate timestamp(6) NOT NULL)\nSELECT *
FROM samples LIMIT 3\nid firstName lastName userAddress userState userCode userPostal
createdate\n1125899906842627 Steven Repici 14 Kingston St. Oregon NJ 5578 Thu Dec 14
2023 13:06:17 GMT+0800 (Singapore Standard Time)\n1125899906842625 John Doe 120
jefferson st. Riverside NJ 8075 Thu Dec 14 2023 13:04:32 GMT+0800 (Singapore Standard
Time)\n1125899906842629 Bert Jet 9th, at Terrace plc Desert City CO 8576 Thu Dec 14
2023 13:07:11 GMT+0800 (Singapore Standard Time)\n------------\nQUESTION: what is the
address of John\n------------\nSQL QUERY:
Output
After executing the SQL query, the result is passed to the 2nd LLMChain:
Input
Based on the question, and SQL response, write a natural language response, be
details as possible:\n------------\nQUESTION: what is the address of John\n----------
--\nSQL RESPONSE: [{\"userAddress\":\"120 jefferson st.\"}]\n------------\nNATURAL
LANGUAGE RESPONSE:
Output
Now, if we ask something that is irrelevant to the SQL database, the generated query fails the
If Else check because it doesn't contain both SELECT and WHERE, hence
entering the Else route that has a prompt that says:
I apologize, but I'm not able to answer your query at the moment.
Conclusion
In this example, we have successfully created a SQL chatbot that can interact with your
database, and is also able to handle questions that are irrelevant to the database. Further improvements
include adding memory to provide conversation history.
Webhook Tool
In this use case tutorial, we are going to create a custom tool that can call a webhook
endpoint and pass the necessary parameters into the webhook body. We'll be using Make.com
to create the webhook workflow.
Make
Head over to Make.com. After registering an account, create a workflow that has a Webhook
module and a Discord module, which looks like below:
From the Webhook module, you should be able to see a webhook URL:
From the Discord module, we are passing the message body from the Webhook as the message
to send to the Discord channel:
To test it out, you can click Run once at the bottom left corner, then send a POST request with a
JSON body:
{
"message": "Hello Discord!"
}
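For example, the test request can be sent from Python; the webhook URL below is a placeholder, so copy the real one from your Webhook module.

```python
import requests

# Placeholder -- replace with the URL shown in your Make.com Webhook module.
WEBHOOK_URL = "https://hook.make.com/your-webhook-id"

body = {"message": "Hello Discord!"}

def trigger(body):
    response = requests.post(WEBHOOK_URL, json=body)
    return response.status_code

# trigger(body)
```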
Perfect! We have successfully configured a workflow that can pass a message and send it to a
Discord channel.
Flowise
In Flowise, we are going to create a custom tool that is able to call the Webhook POST request,
with the message body.
We can then fill in the following fields (feel free to change this according to your needs):
Output Schema:
JavaScript Function:
Click Add to save the custom tool, and you should be able to see it now:
Go to the Discord channel, and you will be able to see the message:
That's it! The OpenAI Function Agent will automatically figure out what to pass as the
message and send it over to Discord. This is just a quick example of how to trigger a webhook
workflow with a dynamic body. The same idea can be applied to workflows that include a webhook and
Gmail, Google Sheets, etc.
You can read more on how to pass chat information like sessionId , flowid and variables to
custom tool - Additional
Tutorials
Watch a step-by-step instruction video on using Webhooks with Flowise custom tools.