Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details. param early_stopping_method: str = 'force'¶ The method to use for early stopping if the agent never returns AgentFinish. Either 'force' or 'generate'. "force" returns a string saying that it stopped because it met a time or iteration limit. "generate" calls the agent's LLM chain one final time to generate a final answer based on the previous steps. param handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False¶ How to handle errors raised by the agent's output parser. Defaults to False, which raises the error. If True, the error will be sent back to the LLM as an observation. If a string, the string itself will be sent to the LLM as an observation. If a callable, the function will be called with the exception as an argument, and the result of that function will be passed to the agent as an observation. param max_execution_time: Optional[float] = None¶ The maximum amount of wall clock time to spend in the execution loop. param max_iterations: Optional[int] = 15¶ The maximum number of steps to take before ending the execution loop. Setting to None could lead to an infinite loop. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.mrkl.base.MRKLChain.html
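A minimal sketch of how the executor parameters documented above might be wired together; the agent and tools names are placeholders assumed to have been built elsewhere, not part of this page:

from langchain.agents import AgentExecutor

def handle_parse_error(error) -> str:
    # Instead of raising, send a short description of the failure
    # back to the agent as an observation.
    return f"Could not parse LLM output: {str(error)[:100]}"

executor = AgentExecutor.from_agent_and_tools(
    agent=agent,                        # assumed: a BaseSingleActionAgent
    tools=tools,                        # assumed: a Sequence[BaseTool]
    max_iterations=15,                  # end the loop after 15 steps
    max_execution_time=60.0,            # or after 60 seconds of wall clock time
    early_stopping_method="generate",   # ask the LLM for one final answer
    handle_parsing_errors=handle_parse_error,
)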
There are many different types of memory - please see the memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param return_intermediate_steps: bool = False¶ Whether to return the agent's trajectory of intermediate steps at the end in addition to the final output. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param tools: Sequence[BaseTool] [Required]¶ The valid tools the agent can call. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there's a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example

# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example

chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}

classmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → AgentExecutor¶
Create from agent and tools. classmethod from_chains(llm: BaseLanguageModel, chains: List[ChainConfig], **kwargs: Any) → AgentExecutor[source]¶ User-friendly way to initialize the MRKL chain. This is intended to be an easy way to get up and running with the MRKL chain. Parameters llm – The LLM to use as the agent LLM. chains – The chains the MRKL system has access to. **kwargs – parameters to be passed to initialization. Returns An initialized MRKL chain. Example

from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.chains.mrkl.base import ChainConfig

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
    ChainConfig(
        action_name="Search",
        action=search.search,
        action_description="useful for searching",
    ),
    ChainConfig(
        action_name="Calculator",
        action=llm_math_chain.run,
        action_description="useful for doing math",
    ),
]
mrkl = MRKLChain.from_chains(llm, chains)

lookup_tool(name: str) → BaseTool¶ Lookup a tool by name. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory. Returns A dictionary of all inputs, including those added by the chain's memory.
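A hedged usage sketch of the calling conventions documented above (__call__, run, and their async counterparts acall/arun), assuming an agent_executor built as in the earlier snippet; the "__run" result key for include_run_info is an assumption about the implementation:

import asyncio

# __call__ takes one input dict and returns a dict of outputs.
result = agent_executor(
    {"input": "What is 2 + 2?"},
    return_only_outputs=True,   # drop the echoed inputs from the result
    include_run_info=True,      # assumed to add run info under a "__run" key
)
print(result["output"])

# run() is the single-string convenience wrapper.
answer = agent_executor.run("What is 2 + 2?")

# acall()/arun() are the async counterparts.
async def main() -> None:
    out = await agent_executor.acall({"input": "What is 2 + 2?"})
    print(out)

asyncio.run(main())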
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there's a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example

# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."

save(file_path: Union[Path, str]) → None¶ Raise error - saving not supported for Agent Executors. save_agent(file_path: Union[Path, str]) → None¶ Save the underlying agent. validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_return_direct_tool  »  all fields¶ Validate that tools are compatible with agent. validator validate_tools  »  all fields¶ Validate that tools are compatible with agent. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
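The save/save_agent split documented above is easy to trip over; a small sketch of the working path, with file and tool names chosen purely for illustration:

# AgentExecutor.save() raises, but the underlying agent can be persisted.
agent_executor.save_agent("agent.yaml")     # illustrative path

# Tools registered on the executor can be fetched back by name.
calculator = agent_executor.lookup_tool("Calculator")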
property lc_namespace: List[str]¶ Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
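To close out this chain API, an illustrative decomposition of what a call does around the core loop, built from the prep_inputs/prep_outputs entries above; the private _call name is an assumption about the implementation, not part of this page's documented API:

# Roughly what __call__ does around the core agent loop:
inputs = agent_executor.prep_inputs("What is 2 + 2?")   # validates and adds memory variables
outputs = agent_executor._call(inputs)                  # assumed private core loop
final = agent_executor.prep_outputs(inputs, outputs, return_only_outputs=False)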
langchain.agents.mrkl.base.ZeroShotAgent¶ class langchain.agents.mrkl.base.ZeroShotAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶ Bases: Agent Agent for the MRKL chain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_tools: Optional[List[str]] = None¶ param llm_chain: langchain.chains.llm.LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.mrkl.base.ZeroShotAgent.html
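To make the plan/aplan step concrete, here is a hedged sketch of the loop an AgentExecutor runs on the agent's behalf; the agent and tools names are assumed to exist, and error handling is omitted:

from langchain.schema import AgentFinish

intermediate_steps = []
while True:
    # Ask the agent what to do next, given everything observed so far.
    decision = agent.plan(intermediate_steps, input="What is 2 + 2?")
    if isinstance(decision, AgentFinish):
        print(decision.return_values)
        break
    # Execute the chosen tool and record the observation.
    tool = {t.name: t for t in tools}[decision.tool]
    observation = tool.run(decision.tool_input)
    intermediate_steps.append((decision, observation))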
classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → PromptTemplate[source]¶ Create prompt in the style of the zero-shot agent. Parameters tools – List of tools the agent will have access to, used to format the prompt. prefix – String to put before the list of tools. suffix – String to put after the list of tools. input_variables – List of input variables the final prompt will expect. Returns A PromptTemplate with the template assembled from the pieces here. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent.
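A quick way to inspect what create_prompt assembles; tools is assumed to be a list of BaseTool built earlier:

from langchain.agents.mrkl.base import ZeroShotAgent

prompt = ZeroShotAgent.create_prompt(
    tools,
    input_variables=["input", "agent_scratchpad"],
)
print(prompt.template)   # prefix, tool list, format instructions, suffix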
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → Agent[source]¶ Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]]¶ get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶ Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use.
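A hedged end-to-end sketch using from_llm_and_tools, assuming llm and tools are defined as in the earlier snippets:

from langchain.agents import AgentExecutor
from langchain.agents.mrkl.base import ZeroShotAgent

agent = ZeroShotAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
executor.run("What is 2 + 2?")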
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:

# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() → Dict¶ validator validate_prompt  »  all fields¶ Validate that prompt matches format. property llm_prefix: str¶ Prefix to append the LLM call with. property observation_prefix: str¶ Prefix to append the observation with. property return_values: List[str]¶ Return values of the agent.
langchain.agents.mrkl.output_parser.MRKLOutputParser¶ class langchain.agents.mrkl.output_parser.MRKLOutputParser[source]¶ Bases: AgentOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → Union[AgentAction, AgentFinish][source]¶ Parse text into agent action/finish. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.mrkl.output_parser.MRKLOutputParser.html
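A small example of what parse does with a well-formed completion; the Thought/Action/Action Input layout follows the zero-shot format instructions shown earlier, and the exact inputs here are illustrative:

from langchain.agents.mrkl.output_parser import MRKLOutputParser

parser = MRKLOutputParser()
step = parser.parse(
    "Thought: I should look this up.\n"
    "Action: Search\n"
    "Action Input: weather in Boise"
)
print(step.tool, "|", step.tool_input)   # -> Search | weather in Boise

# A completion containing "Final Answer:" parses to AgentFinish instead.
done = parser.parse("Final Answer: It is sunny.")
print(done.return_values)                # -> {'output': 'It is sunny.'}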
property lc_namespace: List[str]¶ Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent¶ class langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]¶ Bases: BaseSingleActionAgent An Agent driven by OpenAI's function-powered API. Parameters llm – This should be an instance of ChatOpenAI, specifically a model that supports using functions. tools – The tools this agent has access to. prompt – The prompt for this agent; it should support agent_scratchpad as one of the variables. For an easy way to construct this prompt, use OpenAIFunctionsAgent.create_prompt(…). Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param llm: langchain.base_language.BaseLanguageModel [Required]¶ param prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶ param tools: Sequence[langchain.tools.base.BaseTool] [Required]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. **kwargs – User inputs. Returns Action specifying what tool to use. classmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) → BasePromptTemplate[source]¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html
Create prompt for this agent. Parameters system_message – Message to use as the system message that will be the first in the prompt. extra_prompt_messages – Prompt messages that will be placed between the system message and the new human input. Returns A prompt template to pass into this agent. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) → BaseSingleActionAgent[source]¶ Construct an agent from an LLM and tools. get_allowed_tools() → List[str][source]¶ Get allowed tools. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:

# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
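A hedged construction sketch; the model name and tools are placeholders, and (as noted above) a functions-capable chat model is required:

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)  # supports functions
prompt = OpenAIFunctionsAgent.create_prompt()                # default system message
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
# or equivalently: OpenAIFunctionsAgent.from_llm_and_tools(llm, tools)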
tool_run_logging_kwargs() → Dict¶ validator validate_llm  »  all fields[source]¶ validator validate_prompt  »  all fields[source]¶ property functions: List[dict]¶ property input_keys: List[str]¶ Get input keys. Input refers to user input here. property return_values: List[str]¶ Return values of the agent.
langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent¶ class langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]¶ Bases: BaseMultiActionAgent An Agent driven by OpenAI's function-powered API. Parameters llm – This should be an instance of ChatOpenAI, specifically a model that supports using functions. tools – The tools this agent has access to. prompt – The prompt for this agent; it should support agent_scratchpad as one of the variables. For an easy way to construct this prompt, use OpenAIMultiFunctionsAgent.create_prompt(…). Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param llm: langchain.base_language.BaseLanguageModel [Required]¶ param prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶ param tools: Sequence[langchain.tools.base.BaseTool] [Required]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. **kwargs – User inputs. Returns Action specifying what tool to use. classmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) → BasePromptTemplate[source]¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent.html
Create prompt for this agent. Parameters system_message – Message to use as the system message that will be the first in the prompt. extra_prompt_messages – Prompt messages that will be placed between the system message and the new human input. Returns A prompt template to pass into this agent. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) → BaseMultiActionAgent[source]¶ Construct an agent from an LLM and tools. get_allowed_tools() → List[str][source]¶ Get allowed tools. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:

# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
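A hedged sketch for the multi-action variant; structurally the only difference from the single-action agent above is that plan may return a list of actions. Imports and placeholder names as before, and the question is illustrative:

from langchain.agents.openai_functions_multi_agent.base import OpenAIMultiFunctionsAgent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
agent = OpenAIMultiFunctionsAgent.from_llm_and_tools(llm, tools)
decision = agent.plan([], input="Compare the weather in two cities")
# decision is either a List[AgentAction] or an AgentFinish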
tool_run_logging_kwargs() → Dict¶ validator validate_llm  »  all fields[source]¶ validator validate_prompt  »  all fields[source]¶ property functions: List[dict]¶ property input_keys: List[str]¶ Get input keys. Input refers to user input here. property return_values: List[str]¶ Return values of the agent.
langchain.agents.react.base.ReActChain¶ class langchain.agents.react.base.ReActChain(llm: BaseLanguageModel, docstore: Docstore, *, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]¶ Bases: AgentExecutor Chain that implements the ReAct paper. Example

from langchain import OpenAI, ReActChain
from langchain.docstore import Wikipedia

react = ReActChain(llm=OpenAI(), docstore=Wikipedia())

Initialize with the LLM and a docstore. param agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]¶ The agent to run for creating a plan and determining actions to take at each step of the execution loop. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods; see the Callback docs for full details.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.react.base.ReActChain.html
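A hedged usage sketch of the chain constructed in the example above; an OpenAI API key is assumed, and the question is illustrative:

question = "Author David Chanoff has collaborated with which U.S. Navy admiral?"
react.run(question)   # the agent alternates Search/Lookup steps against Wikipedia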
param early_stopping_method: str = 'force'¶ The method to use for early stopping if the agent never returns AgentFinish. Either 'force' or 'generate'. "force" returns a string saying that it stopped because it met a time or iteration limit. "generate" calls the agent's LLM chain one final time to generate a final answer based on the previous steps. param handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False¶ How to handle errors raised by the agent's output parser. Defaults to False, which raises the error. If True, the error will be sent back to the LLM as an observation. If a string, the string itself will be sent to the LLM as an observation. If a callable, the function will be called with the exception as an argument, and the result of that function will be passed to the agent as an observation. param max_execution_time: Optional[float] = None¶ The maximum amount of wall clock time to spend in the execution loop. param max_iterations: Optional[int] = 15¶ The maximum number of steps to take before ending the execution loop. Setting to None could lead to an infinite loop. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see the memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param return_intermediate_steps: bool = False¶ Whether to return the agent's trajectory of intermediate steps at the end in addition to the final output. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case. param tools: Sequence[BaseTool] [Required]¶ The valid tools the agent can call. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there's a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example

# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."

dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to the default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example

chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}

classmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → AgentExecutor¶ Create from agent and tools.
lookup_tool(name: str) → BaseTool¶ Lookup a tool by name. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain's memory. Returns A dictionary of all inputs, including those added by the chain's memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there's a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example

# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."

# Suppose we have a multi-input chain that takes a 'question' string
# and a 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."

save(file_path: Union[Path, str]) → None¶ Raise error - saving not supported for Agent Executors. save_agent(file_path: Union[Path, str]) → None¶ Save the underlying agent. validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_return_direct_tool  »  all fields¶ Validate that tools are compatible with agent. validator validate_tools  »  all fields¶ Validate that tools are compatible with agent. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
langchain.agents.react.base.ReActDocstoreAgent¶ class langchain.agents.react.base.ReActDocstoreAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶ Bases: Agent Agent for the ReAct chain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_tools: Optional[List[str]] = None¶ param llm_chain: LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶ Return default prompt. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent¶ Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]]¶ get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.react.base.ReActDocstoreAgent.html
Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:

# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() → Dict¶ validator validate_prompt  »  all fields¶ Validate that prompt matches format. property llm_prefix: str¶ Prefix to append the LLM call with. property observation_prefix: str¶ Prefix to append the observation with. property return_values: List[str]¶ Return values of the agent.
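A hedged setup sketch: the ReAct docstore agent is conventionally given exactly two tools, Search and Lookup, backed by a Docstore. The DocstoreExplorer helper and Wikipedia docstore are assumptions drawn from the broader library, not from this page:

from langchain import OpenAI
from langchain.agents import Tool
from langchain.agents.react.base import DocstoreExplorer, ReActDocstoreAgent
from langchain.docstore import Wikipedia

explorer = DocstoreExplorer(Wikipedia())
tools = [
    Tool(name="Search", func=explorer.search, description="Search the docstore."),
    Tool(name="Lookup", func=explorer.lookup, description="Look up a term in the last page."),
]
agent = ReActDocstoreAgent.from_llm_and_tools(OpenAI(temperature=0), tools)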
langchain.agents.react.base.ReActTextWorldAgent¶ class langchain.agents.react.base.ReActTextWorldAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶ Bases: ReActDocstoreAgent Agent for the ReAct TextWorld chain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_tools: Optional[List[str]] = None¶ param llm_chain: LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶ Return default prompt. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent¶ Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]]¶ get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.react.base.ReActTextWorldAgent.html
Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:

# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() → Dict¶ validator validate_prompt  »  all fields¶ Validate that prompt matches format. property llm_prefix: str¶ Prefix to append the LLM call with. property observation_prefix: str¶ Prefix to append the observation with. property return_values: List[str]¶ Return values of the agent.
langchain.agents.react.output_parser.ReActOutputParser¶ class langchain.agents.react.output_parser.ReActOutputParser[source]¶ Bases: AgentOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → Union[AgentAction, AgentFinish][source]¶ Parse text into agent action/finish. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.react.output_parser.ReActOutputParser.html
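A hedged example of the bracketed action format this parser accepts; the exact layout comes from the ReAct prompt, so treat the details as illustrative:

from langchain.agents.react.output_parser import ReActOutputParser

parser = ReActOutputParser()
step = parser.parse("Thought: I need more information.\nAction: Search[Python]")
print(step.tool, "|", step.tool_input)   # -> Search | Python

# A "Finish[...]" action parses to AgentFinish instead.
done = parser.parse("Thought: I know the answer.\nAction: Finish[42]")
print(done.return_values)                # -> {'output': '42'}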
property lc_namespace: List[str]¶ Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
langchain.agents.schema.AgentScratchPadChatPromptTemplate¶ class langchain.agents.schema.AgentScratchPadChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, messages: List[Union[BaseMessagePromptTemplate, BaseMessage]])[source]¶ Bases: ChatPromptTemplate Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]¶ param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example:

prompt.format(variable1="foo")

format_messages(**kwargs: Any) → List[BaseMessage]¶ Format kwargs into a list of messages. format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages. classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]) → ChatPromptTemplate¶ classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate¶ classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.schema.AgentScratchPadChatPromptTemplate.html
classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt. Parameters file_path – Path to directory to save prompt to. Example:

prompt.save(file_path="path/prompt.yaml")

to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_input_variables  »  all fields¶ validator validate_variable_names  »  all fields¶ Validate variable names do not include restricted names. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"]. property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}. property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
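A small sketch of the from_template/format_messages pair documented above, shown on the parent ChatPromptTemplate for simplicity:

from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("What is the capital of {country}?")
messages = prompt.format_messages(country="France")
# -> [HumanMessage(content='What is the capital of France?')]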
langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent¶ class langchain.agents.self_ask_with_search.base.SelfAskWithSearchAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶ Bases: Agent Agent for the self-ask-with-search paper. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_tools: Optional[List[str]] = None¶ param llm_chain: LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶ Prompt does not depend on tools. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent¶ Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶ Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict¶ validator validate_prompt  »  all fields¶ Validate that prompt matches format. property llm_prefix: str¶ Prefix to append the LLM call with. property observation_prefix: str¶ Prefix to append the observation with. property return_values: List[str]¶ Return values of the agent.
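As a hedged sketch (not from the upstream docs): the agent is typically constructed with from_llm_and_tools and a single tool named "Intermediate Answer", which the self-ask prompt expects; a SerpAPI key is assumed to be configured in the environment:

.. code-block:: python

    from langchain import OpenAI, SerpAPIWrapper
    from langchain.agents import Tool
    from langchain.agents.self_ask_with_search.base import SelfAskWithSearchAgent

    search = SerpAPIWrapper()
    tools = [
        Tool(
            name="Intermediate Answer",  # the tool name the self-ask prompt expects
            func=search.run,
            description="Useful for answering factual sub-questions.",
        )
    ]
    agent = SelfAskWithSearchAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)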
langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain¶ class langchain.agents.self_ask_with_search.base.SelfAskWithSearchChain(llm: BaseLanguageModel, search_chain: Union[GoogleSerperAPIWrapper, SerpAPIWrapper], *, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]¶ Bases: AgentExecutor Chain that does self-ask with search. Example from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper search_chain = GoogleSerperAPIWrapper() self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain) Initialize with just an LLM and a search chain. param agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]¶ The agent to run for creating a plan and determining actions to take at each step of the execution loop. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param early_stopping_method: str = 'force'¶ The method to use for early stopping if the agent never returns AgentFinish. Either ‘force’ or ‘generate’. “force” returns a string saying that it stopped because it met a time or iteration limit. “generate” calls the agent’s LLM Chain one final time to generate a final answer based on the previous steps. param handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False¶ How to handle errors raised by the agent’s output parser. Defaults to False, which raises the error. If true, the error will be sent back to the LLM as an observation. If a string, the string itself will be sent to the LLM as an observation. If a callable function, the function will be called with the exception as an argument, and the result of that function will be passed to the agent as an observation. param max_execution_time: Optional[float] = None¶ The maximum amount of wall clock time to spend in the execution loop. param max_iterations: Optional[int] = 15¶ The maximum number of steps to take before ending the execution loop. Setting to ‘None’ could lead to an infinite loop. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param return_intermediate_steps: bool = False¶ Whether to return the agent’s trajectory of intermediate steps at the end in addition to the final output. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param tools: Sequence[BaseTool] [Required]¶ The valid tools the agent can call. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory.
return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None. include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example .. code-block:: python chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, ...} classmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → AgentExecutor¶
Create from agent and tools. lookup_tool(name: str) → BaseTool¶ Look up tool by name. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Raise error - saving not supported for Agent Executors. save_agent(file_path: Union[Path, str]) → None¶ Save the underlying agent. validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_return_direct_tool  »  all fields¶ Validate that tools are compatible with agent. validator validate_tools  »  all fields¶ Validate that tools are compatible with agent. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
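Once constructed as in the example above, the chain can be invoked like any other chain (a sketch; assumes working OpenAI and Google Serper credentials):

.. code-block:: python

    self_ask.run("What is the hometown of the reigning men's U.S. Open champion?")
    # -> a final answer assembled from one or more intermediate search calls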
langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser¶ class langchain.agents.self_ask_with_search.output_parser.SelfAskOutputParser(*, followups: Sequence[str] = ('Follow up:', 'Followup:'), finish_string: str = 'So the final answer is: ')[source]¶ Bases: AgentOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param finish_string: str = 'So the final answer is: '¶ param followups: Sequence[str] = ('Follow up:', 'Followup:')¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → Union[AgentAction, AgentFinish][source]¶ Parse text into agent action/finish. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
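To illustrate parse, a small sketch (not from the upstream docs): a completion whose last line contains a follow-up marker is routed to the search tool as an AgentAction, while one containing the finish string yields an AgentFinish:

.. code-block:: python

    from langchain.agents.self_ask_with_search.output_parser import SelfAskOutputParser

    parser = SelfAskOutputParser()

    # Last line contains a follow-up marker -> AgentAction whose tool input
    # is the text after the colon.
    parser.parse("Yes.\nFollow up: Who won the 2022 men's U.S. Open?")

    # Contains the finish string -> AgentFinish with output "Carlos Alcaraz".
    parser.parse("So the final answer is: Carlos Alcaraz")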
langchain.agents.structured_chat.base.StructuredChatAgent¶ class langchain.agents.structured_chat.base.StructuredChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶ Bases: Agent Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_tools: Optional[List[str]] = None¶ param llm_chain: langchain.chains.llm.LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use.
classmethod create_prompt(tools: Sequence[BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[BasePromptTemplate]] = None) → BasePromptTemplate[source]¶ Create a prompt for this class. dict(**kwargs: Any) → Dict¶ Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[BasePromptTemplate]] = None, **kwargs: Any) → Agent[source]¶ Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]]¶ get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶ Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict¶ validator validate_prompt  »  all fields¶ Validate that prompt matches format. property llm_prefix: str¶ Prefix to append the LLM call with. property observation_prefix: str¶ Prefix to append the observation with. property return_values: List[str]¶ Return values of the agent.
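A hedged construction sketch (not from the upstream docs; the tool is hypothetical and a chat model with an OpenAI key is assumed):

.. code-block:: python

    from langchain.agents import Tool
    from langchain.agents.structured_chat.base import StructuredChatAgent
    from langchain.chat_models import ChatOpenAI

    def word_count(text: str) -> str:
        # Hypothetical single-input tool.
        return str(len(text.split()))

    tools = [Tool(name="WordCount", func=word_count, description="Counts the words in a text.")]
    agent = StructuredChatAgent.from_llm_and_tools(llm=ChatOpenAI(temperature=0), tools=tools)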
langchain.agents.structured_chat.output_parser.StructuredChatOutputParser¶ class langchain.agents.structured_chat.output_parser.StructuredChatOutputParser[source]¶ Bases: AgentOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → Union[AgentAction, AgentFinish][source]¶ Parse text into agent action/finish. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
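For illustration, a minimal parse sketch (an assumption based on the format instructions above, not upstream documentation): the parser extracts the fenced JSON blob following "Action:":

.. code-block:: python

    from langchain.agents.structured_chat.output_parser import StructuredChatOutputParser

    parser = StructuredChatOutputParser()
    text = 'Action:\n```\n{"action": "Final Answer", "action_input": "4"}\n```'
    result = parser.parse(text)
    # Expected: an AgentFinish whose output is "4". A blob naming any other
    # action would instead yield an AgentAction for that tool.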
langchain.agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries¶ class langchain.agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries(*, base_parser: AgentOutputParser = None, output_fixing_parser: Optional[OutputFixingParser] = None)[source]¶ Bases: AgentOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param base_parser: langchain.agents.agent.AgentOutputParser [Optional]¶ param output_fixing_parser: Optional[langchain.output_parsers.fix.OutputFixingParser] = None¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. classmethod from_llm(llm: Optional[BaseLanguageModel] = None, base_parser: Optional[StructuredChatOutputParser] = None) → StructuredChatOutputParserWithRetries[source]¶ get_format_instructions() → str[source]¶ Instructions on how the LLM output should be formatted. parse(text: str) → Union[AgentAction, AgentFinish][source]¶ Parse text into agent action/finish. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
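A hedged sketch of from_llm (not from the upstream docs; assumes an OpenAI key). When an LLM is supplied, malformed output can be routed through an OutputFixingParser built on that LLM before parsing is retried:

.. code-block:: python

    from langchain.agents.structured_chat.output_parser import (
        StructuredChatOutputParserWithRetries,
    )
    from langchain.chat_models import ChatOpenAI

    parser = StructuredChatOutputParserWithRetries.from_llm(llm=ChatOpenAI(temperature=0))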
langchain.agents.tools.InvalidTool¶ class langchain.agents.tools.InvalidTool(*, name: str = 'invalid_tool', description: str = 'Called when tool name is invalid.', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]¶ Bases: BaseTool Tool that is run when an invalid tool name is encountered by the agent. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'Called when tool name is invalid.'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None. This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a tool with its use case. param name: str = 'invalid_tool'¶ The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None. These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶ property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
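A brief usage sketch (not from the upstream docs): the tool input is the invalid tool name the agent requested, and the returned observation nudges the agent toward a valid tool:

.. code-block:: python

    from langchain.agents.tools import InvalidTool

    tool = InvalidTool()
    observation = tool.run("made_up_tool")  # observation explains the name is not a valid tool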
langchain.agents.utils.validate_tools_single_input¶ langchain.agents.utils.validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) → None[source]¶ Validate tools for single input.
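A small sketch of how this helper might be used (hypothetical tool; not from the upstream docs). It raises a ValueError if any tool accepts more than one input; the class_name argument is only used to name the offending agent in the error message:

.. code-block:: python

    from langchain.agents import Tool
    from langchain.agents.utils import validate_tools_single_input

    tools = [Tool(name="Echo", func=lambda s: s, description="Echoes the input.")]
    validate_tools_single_input("MyAgent", tools)  # passes silently for single-input tools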
langchain.base_language.BaseLanguageModel¶ class langchain.base_language.BaseLanguageModel[source]¶ Bases: Serializable, ABC Base class for all language models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. abstract async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶ Take in a list of prompt values and return an LLMResult. classmethod all_required_field_names() → Set[source]¶ abstract async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶ Predict text from text. abstract async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶ Predict message from messages. abstract generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶ Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int[source]¶ Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int[source]¶ Get the number of tokens in the message. get_token_ids(text: str) → List[int][source]¶ Get the token IDs present in the text. abstract predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶ Predict text from text.
abstract predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶ Predict message from messages. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
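Since the class is abstract, the interface is exercised through a concrete subclass. A hedged sketch (assumes OPENAI_API_KEY is set):

.. code-block:: python

    from langchain.llms import OpenAI  # a concrete BaseLanguageModel

    llm = OpenAI(temperature=0)
    text = llm.predict("Translate 'bonjour' to English.", stop=["\n"])
    n_tokens = llm.get_num_tokens("How many tokens is this?")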
langchain.cache.BaseCache¶ class langchain.cache.BaseCache[source]¶ Bases: ABC Base interface for cache. Methods __init__() clear(**kwargs) Clear cache that can take additional keyword arguments. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update cache based on prompt and llm_string. abstract clear(**kwargs: Any) → None[source]¶ Clear cache that can take additional keyword arguments. abstract lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up based on prompt and llm_string. abstract update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Update cache based on prompt and llm_string.
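To show the shape of the interface, a minimal hypothetical subclass backed by a dict (a sketch, not an upstream class):

.. code-block:: python

    from typing import Any, Dict, Optional, Sequence, Tuple

    from langchain.cache import BaseCache
    from langchain.schema import Generation

    class DictCache(BaseCache):
        """Hypothetical in-process cache keyed on (prompt, llm_string)."""

        def __init__(self) -> None:
            self._store: Dict[Tuple[str, str], Sequence[Generation]] = {}

        def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
            return self._store.get((prompt, llm_string))

        def update(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
            self._store[(prompt, llm_string)] = return_val

        def clear(self, **kwargs: Any) -> None:
            self._store.clear()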
langchain.cache.FullLLMCache¶ class langchain.cache.FullLLMCache(**kwargs)[source]¶ Bases: Base SQLite table for full LLM Cache (all generations). A simple constructor that allows initialization from kwargs. Sets attributes on the constructed instance using the names and values in kwargs. Only keys that are present as attributes of the instance’s class are allowed. These could be, for example, any mapped columns or relationships. Methods __init__(**kwargs) A simple constructor that allows initialization from kwargs. Attributes idx llm metadata prompt registry response idx¶ llm¶ metadata: MetaData = MetaData()¶ prompt¶ registry: RegistryType = <sqlalchemy.orm.decl_api.registry object>¶ response¶
langchain.cache.GPTCache¶ class langchain.cache.GPTCache(init_func: Optional[Union[Callable[[Any, str], None], Callable[[Any], None]]] = None)[source]¶ Bases: BaseCache Cache that uses GPTCache as a backend. Initialize by passing in init function (default: None). Parameters init_func (Optional[Callable[[Any], None]]) – init GPTCache function (default: None) Example: .. code-block:: python

    # Initialize GPTCache with a custom init function
    import gptcache
    from gptcache.processor.pre import get_prompt
    from gptcache.manager.factory import manager_factory

    # Avoid multiple caches using the same file, causing different llm
    # model caches to affect each other
    def init_gptcache(cache_obj: gptcache.Cache, llm: str):
        cache_obj.init(
            pre_embedding_func=get_prompt,
            data_manager=manager_factory(
                manager="map",
                data_dir=f"map_cache_{llm}",
            ),
        )

    langchain.llm_cache = GPTCache(init_gptcache)

Methods __init__([init_func]) Initialize by passing in init function (default: None). clear(**kwargs) Clear cache. lookup(prompt, llm_string) Look up the cache data. update(prompt, llm_string, return_val) Update cache. clear(**kwargs: Any) → None[source]¶ Clear cache. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up the cache data. First, retrieve the corresponding cache object using the llm_string parameter, and then retrieve the data from the cache based on the prompt. update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache. First, retrieve the corresponding cache object using the llm_string parameter, and then store the prompt and return_val in the cache object.
langchain.cache.InMemoryCache¶ class langchain.cache.InMemoryCache[source]¶ Bases: BaseCache Cache that stores things in memory. Initialize with empty cache. Methods __init__() Initialize with empty cache. clear(**kwargs) Clear cache. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update cache based on prompt and llm_string. clear(**kwargs: Any) → None[source]¶ Clear cache. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up based on prompt and llm_string. update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Update cache based on prompt and llm_string.
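Typical usage (a sketch): assign an instance to langchain.llm_cache so repeated identical LLM calls in the same process are served from memory:

.. code-block:: python

    import langchain
    from langchain.cache import InMemoryCache

    langchain.llm_cache = InMemoryCache()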
langchain.cache.MomentoCache¶ class langchain.cache.MomentoCache(cache_client: momento.CacheClient, cache_name: str, *, ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶ Bases: BaseCache Cache that uses Momento as a backend. See https://gomomento.com/ Instantiate a prompt cache using Momento as a backend. Note: to instantiate the cache client passed to MomentoCache, you must have a Momento account. See https://gomomento.com/. Parameters cache_client (CacheClient) – The Momento cache client. cache_name (str) – The name of the cache to use to store the data. ttl (Optional[timedelta], optional) – The time to live for the cache items. Defaults to None, ie use the client default TTL. ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist. Defaults to True. Raises ImportError – Momento python package is not installed. TypeError – cache_client is not of type momento.CacheClient ValueError – ttl is non-null and non-positive Methods __init__(cache_client, cache_name, *[, ttl, ...]) Instantiate a prompt cache using Momento as a backend. clear(**kwargs) Clear the cache. from_client_params(cache_name, ttl, *[, ...]) Construct cache from CacheClient parameters. lookup(prompt, llm_string) Lookup llm generations in cache by prompt and associated model and settings. update(prompt, llm_string, return_val) Store llm generations in cache. clear(**kwargs: Any) → None[source]¶ Clear the cache. Raises SdkException – Momento service or network error
classmethod from_client_params(cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoCache[source]¶ Construct cache from CacheClient parameters. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Lookup llm generations in cache by prompt and associated model and settings. Parameters prompt (str) – The prompt run through the language model. llm_string (str) – The language model version and settings. Raises SdkException – Momento service or network error Returns A list of language model generations. Return type Optional[RETURN_VAL_TYPE] update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Store llm generations in cache. Parameters prompt (str) – The prompt run through the language model. llm_string (str) – The language model string. return_val (RETURN_VAL_TYPE) – A list of language model generations. Raises SdkException – Momento service or network error Exception – Unexpected response
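A hedged construction sketch using from_client_params (assumes a Momento auth token is available, e.g. via the MOMENTO_AUTH_TOKEN environment variable):

.. code-block:: python

    from datetime import timedelta

    import langchain
    from langchain.cache import MomentoCache

    langchain.llm_cache = MomentoCache.from_client_params(
        "langchain",  # cache name (hypothetical)
        ttl=timedelta(days=1),
    )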
langchain.cache.RedisCache¶ class langchain.cache.RedisCache(redis_: Any)[source]¶ Bases: BaseCache Cache that uses Redis as a backend. Initialize by passing in Redis instance. Methods __init__(redis_) Initialize by passing in Redis instance. clear(**kwargs) Clear cache. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update cache based on prompt and llm_string. clear(**kwargs: Any) → None[source]¶ Clear cache. If asynchronous is True, flush asynchronously. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up based on prompt and llm_string. update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Update cache based on prompt and llm_string.
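Typical usage (a sketch; assumes a Redis server reachable at the given host and port):

.. code-block:: python

    import langchain
    from redis import Redis
    from langchain.cache import RedisCache

    langchain.llm_cache = RedisCache(redis_=Redis(host="localhost", port=6379))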
langchain.cache.RedisSemanticCache¶ class langchain.cache.RedisSemanticCache(redis_url: str, embedding: Embeddings, score_threshold: float = 0.2)[source]¶ Bases: BaseCache Cache that uses Redis as a vector-store backend. Initialize by passing in a Redis URL and an embedding provider. Parameters redis_url (str) – URL to connect to Redis. embedding (Embedding) – Embedding provider for semantic encoding and search. score_threshold (float, optional) – Similarity score threshold for returning a cached result. Defaults to 0.2. Example: .. code-block:: python

    import langchain
    from langchain.cache import RedisSemanticCache
    from langchain.embeddings import OpenAIEmbeddings

    langchain.llm_cache = RedisSemanticCache(
        redis_url="redis://localhost:6379",
        embedding=OpenAIEmbeddings()
    )

Methods __init__(redis_url, embedding[, score_threshold]) Initialize by passing in a Redis URL and an embedding provider. clear(**kwargs) Clear semantic cache for a given llm_string. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update cache based on prompt and llm_string. clear(**kwargs: Any) → None[source]¶ Clear semantic cache for a given llm_string. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up based on prompt and llm_string. update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Update cache based on prompt and llm_string.
langchain.cache.SQLAlchemyCache¶ class langchain.cache.SQLAlchemyCache(engine: ~sqlalchemy.engine.base.Engine, cache_schema: ~typing.Type[~langchain.cache.FullLLMCache] = <class 'langchain.cache.FullLLMCache'>)[source]¶ Bases: BaseCache Cache that uses SQLAlchemy as a backend. Initialize by creating all tables. Methods __init__(engine[, cache_schema]) Initialize by creating all tables. clear(**kwargs) Clear cache. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update based on prompt and llm_string. clear(**kwargs: Any) → None[source]¶ Clear cache. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶ Look up based on prompt and llm_string. update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶ Update based on prompt and llm_string.
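A usage sketch (the DSN is hypothetical; any SQLAlchemy-supported database should work):

.. code-block:: python

    import langchain
    from sqlalchemy import create_engine
    from langchain.cache import SQLAlchemyCache

    engine = create_engine("postgresql://user:password@localhost:5432/llm_cache")
    langchain.llm_cache = SQLAlchemyCache(engine)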
langchain.cache.SQLiteCache¶ class langchain.cache.SQLiteCache(database_path: str = '.langchain.db')[source]¶ Bases: SQLAlchemyCache Cache that uses SQLite as a backend. Initialize by creating the engine and all tables. Methods __init__([database_path]) Initialize by creating the engine and all tables. clear(**kwargs) Clear cache. lookup(prompt, llm_string) Look up based on prompt and llm_string. update(prompt, llm_string, return_val) Update based on prompt and llm_string. clear(**kwargs: Any) → None¶ Clear cache. lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]]¶ Look up based on prompt and llm_string. update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None¶ Update based on prompt and llm_string.
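Typical usage (a sketch): point the cache at a database file and assign it globally:

.. code-block:: python

    import langchain
    from langchain.cache import SQLiteCache

    langchain.llm_cache = SQLiteCache(database_path=".langchain.db")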
langchain.callbacks.aim_callback.AimCallbackHandler¶ class langchain.callbacks.aim_callback.AimCallbackHandler(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True)[source]¶ Bases: BaseMetadataCallbackHandler, BaseCallbackHandler Callback Handler that logs to Aim. Parameters repo (str, optional) – Aim repository path or Repo object to which Run object is bound. If skipped, default Repo is used. experiment_name (str, optional) – Sets Run’s experiment property. ‘default’ if not specified. Can be used later to query runs/sequences. system_tracking_interval (int, optional) – Sets the tracking interval in seconds for system usage metrics (CPU, Memory, etc.). Set to None to disable system metrics tracking. log_system_params (bool, optional) – Enable/Disable logging of system params such as installed packages, git info, environment variables, etc. This handler utilizes the associated callback methods, formats the input of each callback function with metadata regarding the state of the LLM run, and then logs the response to Aim. Initialize callback handler. Methods __init__([repo, experiment_name, ...]) Initialize callback handler. flush_tracker([repo, experiment_name, ...]) Flush the tracker and reset the session. get_custom_callback_meta() on_agent_action(action, **kwargs) Run on agent action. on_agent_finish(finish, **kwargs) Run when agent ends running. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run when LLM generates a new token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, **kwargs) Run when agent is ending. on_tool_end(output, **kwargs) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. reset_callback_meta() Reset the callback metadata. setup(**kwargs) Attributes always_verbose Whether to call verbose callbacks even if verbose is False. ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline
flush_tracker(repo: Optional[str] = None, experiment_name: Optional[str] = None, system_tracking_interval: Optional[int] = 10, log_system_params: bool = True, langchain_asset: Any = None, reset: bool = True, finish: bool = False) → None[source]¶ Flush the tracker and reset the session. Parameters repo (str, optional) – Aim repository path or Repo object to which Run object is bound. If skipped, default Repo is used. experiment_name (str, optional) – Sets Run’s experiment property. ‘default’ if not specified. Can be used later to query runs/sequences. system_tracking_interval (int, optional) – Sets the tracking interval in seconds for system usage metrics (CPU, Memory, etc.). Set to None to disable system metrics tracking. log_system_params (bool, optional) – Enable/Disable logging of system params such as installed packages, git info, environment variables, etc. langchain_asset – The langchain asset to save. reset – Whether to reset the session. finish – Whether to finish the run. Returns None. get_custom_callback_meta() → Dict[str, Any]¶ on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶ Run when agent ends running. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run when LLM generates a new token. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
reset_callback_meta() → None¶
Reset the callback metadata.
setup(**kwargs: Any) → None[source]¶
property always_verbose: bool¶
Whether to call verbose callbacks even if verbose is False.
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
raise_error: bool = False¶
run_inline: bool = False¶
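To make the flush_tracker flow above concrete, here is a minimal usage sketch in the style of the other handler examples. It is an illustration under stated assumptions, not canonical documentation: it assumes the aim and openai packages are installed, and the repository path, experiment name, prompt, and API key handling are placeholders.
>>> from langchain.callbacks import AimCallbackHandler
>>> from langchain.llms import OpenAI
>>> aim_callback = AimCallbackHandler(
...     repo=".",  # placeholder: a local Aim repository
...     experiment_name="scenario-1",  # placeholder experiment name
... )
>>> llm = OpenAI(temperature=0, callbacks=[aim_callback])
>>> llm.generate(["Tell me a joke about callbacks."])
>>> # flush_tracker logs the collected records to Aim, resets the session,
>>> # and with finish=True closes the underlying Run.
>>> aim_callback.flush_tracker(langchain_asset=llm, finish=True)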
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.aim_callback.AimCallbackHandler.html
langchain.callbacks.aim_callback.import_aim¶
langchain.callbacks.aim_callback.import_aim() → Any[source]¶
Import the aim python package and raise an error if it is not installed.
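The guarded-import pattern this helper implements looks roughly like the sketch below. The exact error message is an assumption for illustration, not the library's actual wording.
>>> def import_aim():
...     try:
...         import aim
...     except ImportError:
...         # Assumed message; the real one may differ.
...         raise ImportError(
...             "Could not import the aim python package. "
...             "Please install it with `pip install aim`."
...         )
...     return aim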
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.aim_callback.import_aim.html
langchain.callbacks.argilla_callback.ArgillaCallbackHandler¶
class langchain.callbacks.argilla_callback.ArgillaCallbackHandler(dataset_name: str, workspace_name: Optional[str] = None, api_url: Optional[str] = None, api_key: Optional[str] = None)[source]¶
Bases: BaseCallbackHandler
Callback Handler that logs into Argilla.
Parameters
dataset_name – name of the FeedbackDataset in Argilla. Note that it must exist in advance. If you need help creating a FeedbackDataset in Argilla, please visit https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.
workspace_name – name of the workspace in Argilla where the specified FeedbackDataset lives. Defaults to None, which means the default workspace will be used.
api_url – URL of the Argilla Server to use, and where the FeedbackDataset lives. Defaults to None, which means that either the ARGILLA_API_URL environment variable or the default http://localhost:6900 will be used.
api_key – API key to connect to the Argilla Server. Defaults to None, which means that either the ARGILLA_API_KEY environment variable or the default argilla.apikey will be used.
Raises
ImportError – if the argilla package is not installed.
ConnectionError – if the connection to Argilla fails.
FileNotFoundError – if the FeedbackDataset retrieval from Argilla fails.
Examples
>>> from langchain.llms import OpenAI
>>> from langchain.callbacks import ArgillaCallbackHandler
>>> argilla_callback = ArgillaCallbackHandler(
...     dataset_name="my-dataset",
...     workspace_name="my-workspace",
...     api_url="http://localhost:6900",
...     api_key="argilla.apikey",
... )
>>> llm = OpenAI(
...     temperature=0,
...     callbacks=[argilla_callback],
...     verbose=True,
...     openai_api_key="API_KEY_HERE",
... )
>>> llm.generate([
...     "What is the best NLP-annotation tool out there? (no bias at all)",
... ])
"Argilla, no doubt about it."
Initializes the ArgillaCallbackHandler. The parameters and raised exceptions are the same as those documented for the class above.
Methods
__init__(dataset_name[, workspace_name, ...]) Initializes the ArgillaCallbackHandler.
on_agent_action(action, **kwargs) Do nothing when agent takes a specific action.
on_agent_finish(finish, **kwargs) Do nothing.
on_chain_end(outputs, **kwargs) If either the parent_run_id or the run_id is in self.prompts, then log the outputs to Argilla, and pop the run from self.prompts.
on_chain_error(error, **kwargs) Do nothing when LLM chain outputs an error.
on_chain_start(serialized, inputs, **kwargs) If the key input is in inputs, then save it in self.prompts using either the parent_run_id or the run_id as the key.
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running.
on_llm_end(response, **kwargs) Log records to Argilla when an LLM ends.
on_llm_error(error, **kwargs) Do nothing when LLM outputs an error.
on_llm_new_token(token, **kwargs) Do nothing when a new token is generated.
on_llm_start(serialized, prompts, **kwargs) Save the prompts in memory when an LLM starts.
on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running.
on_text(text, **kwargs) Do nothing.
on_tool_end(output[, observation_prefix, ...]) Do nothing when tool ends.
on_tool_error(error, **kwargs) Do nothing when tool outputs an error.
on_tool_start(serialized, input_str, **kwargs) Do nothing when tool starts.
Attributes
ignore_agent Whether to ignore agent callbacks.
ignore_chain Whether to ignore chain callbacks.
ignore_chat_model Whether to ignore chat model callbacks.
ignore_llm Whether to ignore LLM callbacks.
ignore_retriever Whether to ignore retriever callbacks.
raise_error
run_inline
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing when agent takes a specific action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Do nothing.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
If either the parent_run_id or the run_id is in self.prompts, then log the outputs to Argilla, and pop the run from self.prompts. The behavior differs depending on whether the output is a list or not.
on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing when LLM chain outputs an error.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
If the key input is in inputs, then save it in self.prompts using either the parent_run_id or the run_id as the key. This is done so that we don’t log the same input prompt twice, once when the LLM starts and once when the chain starts. A simplified sketch of this bookkeeping appears after the method details below.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Log records to Argilla when an LLM ends.
on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing when LLM outputs an error.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing when a new token is generated.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Save the prompts in memory when an LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_text(text: str, **kwargs: Any) → None[source]¶
Do nothing.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when tool ends.
on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing when tool outputs an error.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Do nothing when tool starts.
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
raise_error: bool = False¶
run_inline: bool = False¶
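As promised above, here is a simplified sketch of the prompt bookkeeping that on_llm_start, on_chain_start, and on_chain_end describe. It is an exposition aid under assumed simplifications, not the handler's actual code: a dict keyed by run id stores prompts, and the run is popped when the chain ends.
>>> prompts = {}  # run_id -> saved prompts, standing in for self.prompts
>>> def chain_start(inputs, run_id):
...     # Save the "input" key only once per run, so the same prompt is
...     # not logged twice (once for the LLM, once for the chain).
...     if "input" in inputs and run_id not in prompts:
...         prompts[run_id] = [inputs["input"]]
>>> def chain_end(outputs, run_id):
...     # Pop the run and pair its prompts with the outputs for logging.
...     saved = prompts.pop(run_id, None)
...     if saved is not None:
...         print(f"log to Argilla: {saved} -> {outputs}")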
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.argilla_callback.ArgillaCallbackHandler.html
langchain.callbacks.arize_callback.ArizeCallbackHandler¶
class langchain.callbacks.arize_callback.ArizeCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None)[source]¶
Bases: BaseCallbackHandler
Callback Handler that logs to Arize.
Initialize callback handler.
Methods
__init__([model_id, model_version, ...]) Initialize callback handler.
on_agent_action(action, **kwargs) Do nothing.
on_agent_finish(finish, **kwargs) Run on agent end.
on_chain_end(outputs, **kwargs) Do nothing.
on_chain_error(error, **kwargs) Do nothing.
on_chain_start(serialized, inputs, **kwargs) Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running.
on_llm_end(response, **kwargs) Run when LLM ends running.
on_llm_error(error, **kwargs) Do nothing.
on_llm_new_token(token, **kwargs) Do nothing.
on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running.
on_text(text, **kwargs) Run on arbitrary text.
on_tool_end(output[, observation_prefix, ...]) Run when tool ends running.
on_tool_error(error, **kwargs) Run when tool errors.
on_tool_start(serialized, input_str, **kwargs) Run when tool starts running.
Attributes
ignore_agent Whether to ignore agent callbacks.
ignore_chain Whether to ignore chain callbacks.
ignore_chat_model Whether to ignore chat model callbacks.
ignore_llm Whether to ignore LLM callbacks.
ignore_retriever Whether to ignore retriever callbacks.
raise_error
run_inline
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing.
on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_text(text: str, **kwargs: Any) → None[source]¶
Run on arbitrary text.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
raise_error: bool = False¶
run_inline: bool = False¶
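Usage follows the same pattern as the other handlers: construct the handler and pass it via callbacks. A minimal sketch with placeholder credentials and identifiers (the model id, version, space key, and API key below are illustrative, and the openai package is assumed installed):
>>> from langchain.callbacks import ArizeCallbackHandler
>>> from langchain.llms import OpenAI
>>> arize_callback = ArizeCallbackHandler(
...     model_id="llm-demo",  # placeholder model id
...     model_version="1.0",  # placeholder model version
...     SPACE_KEY="YOUR_SPACE_KEY",  # placeholder Arize space key
...     API_KEY="YOUR_API_KEY",  # placeholder Arize API key
... )
>>> llm = OpenAI(temperature=0, callbacks=[arize_callback])
>>> llm.generate(["Why log LLM calls to an observability platform?"])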
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.arize_callback.ArizeCallbackHandler.html
langchain.callbacks.arthur_callback.ArthurCallbackHandler¶
class langchain.callbacks.arthur_callback.ArthurCallbackHandler(arthur_model: ArthurModel)[source]¶
Bases: BaseCallbackHandler
Callback Handler that logs to the Arthur platform. Arthur helps enterprise teams optimize model operations and performance at scale. The Arthur API tracks model performance, explainability, and fairness across tabular, NLP, and CV models. It is model- and platform-agnostic, and continuously scales with complex and dynamic enterprise needs. To learn more about Arthur, visit https://www.arthur.ai/ or read the Arthur docs at https://docs.arthur.ai/.
Initialize callback handler.
Methods
__init__(arthur_model) Initialize callback handler.
from_credentials(model_id[, arthur_url, ...]) Initialize callback handler from Arthur credentials.
on_agent_action(action, **kwargs) Do nothing when agent takes a specific action.
on_agent_finish(finish, **kwargs) Do nothing.
on_chain_end(outputs, **kwargs) On chain end, do nothing.
on_chain_error(error, **kwargs) Do nothing when LLM chain outputs an error.
on_chain_start(serialized, inputs, **kwargs) On chain start, do nothing.
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running.
on_llm_end(response, **kwargs) On LLM end, send data to Arthur.
on_llm_error(error, **kwargs) Do nothing when LLM outputs an error.
on_llm_new_token(token, **kwargs) On new token, pass.
on_llm_start(serialized, prompts, **kwargs) On LLM start, save the input prompts.
on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running.
on_text(text, **kwargs) Do nothing.
on_tool_end(output[, observation_prefix, ...]) Do nothing when tool ends.
on_tool_error(error, **kwargs) Do nothing when tool outputs an error.
on_tool_start(serialized, input_str, **kwargs) Do nothing when tool starts.
Attributes
ignore_agent Whether to ignore agent callbacks.
ignore_chain Whether to ignore chain callbacks.
ignore_chat_model Whether to ignore chat model callbacks.
ignore_llm Whether to ignore LLM callbacks.
ignore_retriever Whether to ignore retriever callbacks.
raise_error
run_inline
classmethod from_credentials(model_id: str, arthur_url: Optional[str] = 'https://app.arthur.ai', arthur_login: Optional[str] = None, arthur_password: Optional[str] = None) → ArthurCallbackHandler[source]¶
Initialize callback handler from Arthur credentials.
Parameters
model_id (str) – The ID of the Arthur model to log to.
arthur_url (str, optional) – The URL of the Arthur instance to log to. Defaults to “https://app.arthur.ai”.
arthur_login (str, optional) – The login to use to connect to Arthur. Defaults to None.
arthur_password (str, optional) – The password to use to connect to Arthur. Defaults to None.
Returns
The initialized callback handler.
Return type
ArthurCallbackHandler
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing when agent takes a specific action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Do nothing.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
On chain end, do nothing.
on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing when LLM chain outputs an error.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
On chain start, do nothing.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
On LLM end, send data to Arthur.
on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing when LLM outputs an error.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
On new token, pass.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
On LLM start, save the input prompts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_text(text: str, **kwargs: Any) → None[source]¶
Do nothing.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when tool ends.
on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Do nothing when tool outputs an error.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Do nothing when tool starts.
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
raise_error: bool = False¶
run_inline: bool = False¶
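A minimal sketch of the from_credentials flow documented above, with placeholder credentials (the model id, login, and password are illustrative; the openai package is assumed installed):
>>> from langchain.callbacks import ArthurCallbackHandler
>>> from langchain.llms import OpenAI
>>> arthur_callback = ArthurCallbackHandler.from_credentials(
...     model_id="YOUR_ARTHUR_MODEL_ID",  # placeholder Arthur model ID
...     arthur_login="user@example.com",  # placeholder login
...     arthur_password="YOUR_PASSWORD",  # placeholder password
... )
>>> llm = OpenAI(temperature=0, callbacks=[arthur_callback])
>>> llm.generate(["Summarize what on_llm_end sends to Arthur."])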
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.arthur_callback.ArthurCallbackHandler.html
langchain.callbacks.base.AsyncCallbackHandler¶
class langchain.callbacks.base.AsyncCallbackHandler[source]¶
Bases: BaseCallbackHandler
Async callback handler that can be used to handle callbacks from langchain.
Methods
__init__()
on_agent_action(action, *, run_id[, ...]) Run on agent action.
on_agent_finish(finish, *, run_id[, ...]) Run on agent end.
on_chain_end(outputs, *, run_id[, ...]) Run when chain ends running.
on_chain_error(error, *, run_id[, ...]) Run when chain errors.
on_chain_start(serialized, inputs, *, run_id) Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running.
on_llm_end(response, *, run_id[, ...]) Run when LLM ends running.
on_llm_error(error, *, run_id[, ...]) Run when LLM errors.
on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id) Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...]) Run on retriever end.
on_retriever_error(error, *, run_id[, ...]) Run on retriever error.
on_retriever_start(serialized, query, *, run_id) Run on retriever start.
on_text(text, *, run_id[, parent_run_id, tags]) Run on arbitrary text.
on_tool_end(output, *, run_id[, ...]) Run when tool ends running.
on_tool_error(error, *, run_id[, ...]) Run when tool errors.
on_tool_start(serialized, input_str, *, run_id) Run when tool starts running.
Attributes
ignore_agent Whether to ignore agent callbacks.
ignore_chain Whether to ignore chain callbacks.
ignore_chat_model Whether to ignore chat model callbacks.
ignore_llm Whether to ignore LLM callbacks.
ignore_retriever Whether to ignore retriever callbacks.
raise_error
run_inline
async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on agent action.
async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on agent end.
async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when chain ends running.
async on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when chain errors.
async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when chain starts running.
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶
Run when a chat model starts running.
async on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when LLM ends running.
async on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when LLM errors.
async on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when LLM starts running.
async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on retriever end.
async on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on retriever error.
async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run on retriever start.
async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on arbitrary text.
async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
async on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when tool errors.
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when tool starts running.
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
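Because every hook is a coroutine, a custom handler subclasses AsyncCallbackHandler and overrides only the hooks it needs. A minimal sketch under stated assumptions (the class name and print-based handling are illustrative, and streaming must be enabled for on_llm_new_token to fire):
>>> import asyncio
>>> from langchain.callbacks.base import AsyncCallbackHandler
>>> from langchain.llms import OpenAI
>>> class TokenPrinter(AsyncCallbackHandler):
...     async def on_llm_new_token(self, token, **kwargs):
...         # Called once per streamed token; run metadata such as
...         # run_id, parent_run_id, and tags arrives via kwargs.
...         print(token, end="", flush=True)
>>> async def main():
...     llm = OpenAI(streaming=True, temperature=0, callbacks=[TokenPrinter()])
...     await llm.agenerate(["Write a haiku about callbacks."])
>>> asyncio.run(main())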
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.base.AsyncCallbackHandler.html