Function Calling
Link: https://python.langchain.com/v0.1/docs/modules/model_io/chat/function_calling/
Introduction
Function calling with LangChain and Large Language Models (LLMs) is a process
where the model interprets a user query, identifies an appropriate pre-defined
function, and returns a structured call to that function, which the application then
executes to generate a response. This capability allows for dynamic, context-aware
interactions, making LLMs more versatile and functional.
Key Concepts
Defining Tool Schemas: Tools are defined using schemas that specify the function's
structure and expected inputs. These schemas are necessary for the model to
understand how to use the tools effectively. In LangChain, you can define these
schemas using the @tool decorator on Python functions.
from langchain_core.tools import tool

@tool
def get_temperature(location: str) -> int:
    """Fetches the temperature in Celsius for a specific location.

    Args:
        location: A city or country on Earth.
    """
    # Placeholder: a real implementation would call a weather API here.
    return 25
Binding Tools to the Model: To enable the model to invoke tools, you must bind the
tool schemas to the model using the .bind_tools method. This method accepts a list
of tool objects, Pydantic classes, or JSON schemas and binds them to the chat model
in the expected format.
llm_with_tools = chat_model.bind_tools([get_temperature])
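As a fuller sketch, here is one way to create the chat model and bind the tool
(assuming langchain-openai is installed and an API key is configured; the model
name is illustrative, and any chat model that supports tool calling works):

from langchain_openai import ChatOpenAI

# Any tool-calling chat model can be substituted here.
chat_model = ChatOpenAI(model="gpt-3.5-turbo-0125")
llm_with_tools = chat_model.bind_tools([get_temperature])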
Model Invocation: Once tools are bound, the model can invoke these tools based on
user queries. The model processes the input, identifies the relevant tool, and returns
a structured tool call containing the function name and the necessary arguments; it
does not run the function itself, as the sketch below shows.
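A minimal sketch of invoking the tool-bound model (the query string is illustrative):

query = "What is the temperature in Pune?"
ai_msg = llm_with_tools.invoke(query)
# The model does not execute the function; it returns the call it wants made.
print(ai_msg.tool_calls)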
Output Structure: The model's response contains a list of tool calls in its .tool_calls
attribute. Each tool call includes the function name, the arguments, and an id.
[{'name': 'get_temperature',
  'args': {'location': 'pune'},
  'id': 'abcd'}]
Here you can see the function name and the arguments the model wants to use to
answer the query.
Function Execution: Since the LLM's function-calling output only tells us which
function to run and with what arguments, we execute the function ourselves and
record its output as a tool message that can be passed back to the model, as
sketched below.
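A sketch of executing the requested tool call and recording the result as a
ToolMessage, following the pattern from the linked LangChain docs (the message
contents are illustrative):

from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)

# Run each requested tool call ourselves and report the result back.
for tool_call in ai_msg.tool_calls:
    tool_output = get_temperature.invoke(tool_call["args"])
    messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call["id"]))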
Finally: We can feed the tool output back to the LLM to get a short natural-language
explanation of the answer for better user understanding.
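Continuing the sketch above, one more model call turns the tool result into a
readable answer:

# The model now sees the tool result and can phrase a final answer.
final_msg = llm_with_tools.invoke(messages)
print(final_msg.content)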
Conclusion: