The ChatAgent class is a deprecated agent implementation in LangChain, built as a Pydantic model that parses and validates its input data. It lets you restrict the agent to a set of allowed tools, drive it with an LLMChain, and customize output parsing, and it supports both synchronous and asynchronous decision-making based on user inputs and intermediate steps. ChatAgent has been deprecated since version 0.1.0; use create_react_agent instead.

ChatAgent — 🦜🔗 LangChain documentation

python.langchain.com/v0.2/api_reference/langchain/agents/langchain.agents.chat.base.ChatAgent.html

Bases: Agent

Deprecated since version 0.1.0: Use create_react_agent instead.

Chat Agent.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param allowed_tools: List[str] | None = None#
Allowed tools for the agent. If None, all tools are allowed.

param llm_chain: LLMChain [Required]#
LLMChain to use for agent.

param output_parser: AgentOutputParser [Optional]#
Output parser for the agent.
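The allowed_tools semantics described above can be illustrated with a small pure-Python sketch (a hypothetical helper, not LangChain code): when the list is None every tool passes, otherwise only listed names do.

```python
from typing import List, Optional

def is_tool_allowed(tool_name: str, allowed_tools: Optional[List[str]]) -> bool:
    """Mirror the allowed_tools semantics: None means every tool is allowed."""
    return allowed_tools is None or tool_name in allowed_tools

print(is_tool_allowed("search", None))            # True: no restriction
print(is_tool_allowed("search", ["calculator"]))  # False: not in the allow-list
```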

async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) → AgentAction | AgentFinish#
Async: given input, decide what to do.

Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken
to date, along with observations.

callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.

**kwargs (Any) – User inputs.

Returns:
Action specifying what tool to use.

Return type:
AgentAction | AgentFinish
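The AgentAction | AgentFinish return contract can be sketched in plain Python (the dataclasses below are stand-ins for the real langchain_core types): the agent either names the next tool to call or signals that it is finished.

```python
import asyncio
from dataclasses import dataclass
from typing import Any, Union

@dataclass
class AgentAction:  # stand-in for langchain_core's AgentAction
    tool: str
    tool_input: str
    log: str

@dataclass
class AgentFinish:  # stand-in for langchain_core's AgentFinish
    return_values: dict
    log: str

async def aplan(intermediate_steps, **kwargs: Any) -> Union[AgentAction, AgentFinish]:
    # Toy policy: take one action, then finish on the next call.
    if not intermediate_steps:
        return AgentAction(tool="search", tool_input=kwargs["input"], log="looking it up")
    return AgentFinish(return_values={"output": "done"}, log="finished")

step = asyncio.run(aplan([], input="capital of France"))
print(type(step).__name__)  # AgentAction
```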

classmethod create_prompt(tools: Sequence[BaseTool], system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: List[str] | None = None) → BasePromptTemplate[source]#

Create a prompt from a list of tools.

Parameters:
tools (Sequence[BaseTool]) – A list of tools.

system_message_prefix (str) – The system message prefix. Default is SYSTEM_MESSAGE_PREFIX.

system_message_suffix (str) – The system message suffix. Default is SYSTEM_MESSAGE_SUFFIX.

human_message (str) – The human message. Default is HUMAN_MESSAGE.

format_instructions (str) – The format instructions. Default is FORMAT_INSTRUCTIONS.

input_variables (List[str] | None) – The input variables. Default is None.

Returns:
A prompt template.

Return type:
BasePromptTemplate
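The format_instructions above ask the model to wrap a single JSON blob in triple backticks. A minimal pure-Python sketch of how such a reply could be parsed (an illustration of the format only, not LangChain's actual ChatOutputParser):

```python
import json
import re

def parse_action_blob(llm_output: str) -> dict:
    """Extract the fenced $JSON_BLOB and decode its action/action_input keys."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", llm_output, re.DOTALL)
    if match is None:
        raise ValueError("No JSON blob found in LLM output")
    return json.loads(match.group(1))

fence = "`" * 3
reply = (
    "Thought: I should search for this.\n"
    "Action:\n"
    f"{fence}\n"
    '{"action": "search", "action_input": "capital of France"}\n'
    f"{fence}"
)
blob = parse_action_blob(reply)
print(blob["action"])  # search
```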

classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: BaseCallbackManager | None = None, output_parser: AgentOutputParser | None = None, system_message_prefix: str = SYSTEM_MESSAGE_PREFIX, system_message_suffix: str = SYSTEM_MESSAGE_SUFFIX, human_message: str = HUMAN_MESSAGE, format_instructions: str = FORMAT_INSTRUCTIONS, input_variables: List[str] | None = None, **kwargs: Any) → Agent#

Construct an agent from an LLM and tools.
Parameters:
llm (BaseLanguageModel) – The language model.

tools (Sequence[BaseTool]) – A list of tools.

callback_manager (BaseCallbackManager | None) – The callback manager. Default is None.

output_parser (AgentOutputParser | None) – The output parser. Default is None.

system_message_prefix (str) – The system message prefix. Default is SYSTEM_MESSAGE_PREFIX.

system_message_suffix (str) – The system message suffix. Default is SYSTEM_MESSAGE_SUFFIX.

human_message (str) – The human message. Default is HUMAN_MESSAGE.

format_instructions (str) – The format instructions. Default is FORMAT_INSTRUCTIONS.

input_variables (List[str] | None) – The input variables. Default is None.

kwargs (Any) – Additional keyword arguments.

Returns:
An agent.

Return type:
Agent

get_allowed_tools() → List[str] | None#
Get allowed tools.

Return type:
List[str] | None

get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]#
Create the full inputs for the LLMChain from intermediate steps.

Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken
to date, along with observations.

**kwargs (Any) – User inputs.

Returns:
Full inputs for the LLMChain.

Return type:
Dict[str, Any]
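The full inputs are essentially the user inputs plus an agent_scratchpad built from the intermediate steps. A simplified stand-alone sketch of that construction (hypothetical code mirroring the Thought/Observation transcript format, not LangChain's exact internals):

```python
from typing import Any, Dict, List, Tuple

def build_full_inputs(
    intermediate_steps: List[Tuple[Any, str]], **kwargs: Any
) -> Dict[str, Any]:
    """Fold (action, observation) pairs into an agent_scratchpad string."""
    thoughts = ""
    for action, observation in intermediate_steps:
        thoughts += action.log
        thoughts += f"\nObservation: {observation}\nThought:"
    return {"agent_scratchpad": thoughts, **kwargs}

class FakeAction:
    log = "I should look this up.\nAction: search"

inputs = build_full_inputs([(FakeAction(), "Paris")], input="capital of France")
print(inputs["input"])  # capital of France
```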

plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) → AgentAction | AgentFinish#
Given input, decide what to do.

Parameters:
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken
to date, along with observations.

callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.

**kwargs (Any) – User inputs.

Returns:
Action specifying what tool to use.

Return type:
AgentAction | AgentFinish

return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish#
Return response when agent has been stopped due to max iterations.

Parameters:
early_stopping_method (str) – Method to use for early stopping.

intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

**kwargs (Any) – User inputs.

Returns:
Agent finish object.

Return type:
AgentFinish

Raises:
ValueError – If early_stopping_method is not in ['force', 'generate'].
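The 'force' branch of early stopping can be sketched in plain Python (a simplified stand-in; the 'generate' method is omitted here because it would require one final LLM call):

```python
from dataclasses import dataclass

@dataclass
class AgentFinish:  # stand-in for langchain_core's AgentFinish
    return_values: dict
    log: str

def return_stopped_response(early_stopping_method: str) -> AgentFinish:
    """'force' returns a canned answer; anything else is rejected."""
    if early_stopping_method == "force":
        return AgentFinish(
            {"output": "Agent stopped due to iteration limit or time limit."}, ""
        )
    # 'generate' omitted in this sketch: it would make a final LLM call.
    raise ValueError(
        f"Got unsupported early_stopping_method `{early_stopping_method}`"
    )

finish = return_stopped_response("force")
print(finish.return_values["output"])  # Agent stopped due to iteration limit or time limit.
```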

save(file_path: Path | str) → None#
Save the agent.

Parameters:
file_path (Path | str) – Path to file to save the agent to.

Return type:
None

Example:

.. code-block:: python

    # If working with agent executor
    agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() → Dict#
Return logging kwargs for tool run.

Return type:
Dict

property llm_prefix: str#
Prefix to append the llm call with.

property observation_prefix: str#
Prefix to append the observation with.

property return_values: List[str]#
Return values of the agent.
