Columns: id (string, 14–15 chars), text (string, 22–2.51k chars), source (string, 60–153 chars)
bbb087daf476-1
dataset_id (Optional[str]) – WhyLabs dataset id to write profiles to. Optional because the preferred way to specify the dataset id is with environment variable WHYLABS_DEFAULT_DATASET_ID. sentiment (bool) – Whether to enable sentiment analysis. Defaults to False. toxicity (bool) – Whether to enable toxicity analysis. Defaults to False. themes (bool) – Whether to enable theme analysis. Defaults to False. Initiate the rolling logger Methods __init__(logger) Initiate the rolling logger close() flush() from_params(*[, api_key, org_id, ...]) Instantiate whylogs Logger from params. on_agent_action(action[, color]) Do nothing. on_agent_finish(finish[, color]) Run on agent end. on_chain_end(outputs, **kwargs) Do nothing. on_chain_error(error, **kwargs) Do nothing. on_chain_start(serialized, inputs, **kwargs) Do nothing. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Pass the generated response to the logger. on_llm_error(error, **kwargs) Do nothing. on_llm_new_token(token, **kwargs) Do nothing. on_llm_start(serialized, prompts, **kwargs) Pass the input prompts to the logger on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html
bbb087daf476-2
Run when Retriever starts running. on_text(text, **kwargs) Do nothing. on_tool_end(output[, color, ...]) Do nothing. on_tool_error(error, **kwargs) Do nothing. on_tool_start(serialized, input_str, **kwargs) Do nothing. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline close() → None[source]¶ flush() → None[source]¶ classmethod from_params(*, api_key: Optional[str] = None, org_id: Optional[str] = None, dataset_id: Optional[str] = None, sentiment: bool = False, toxicity: bool = False, themes: bool = False) → Logger[source]¶ Instantiate whylogs Logger from params. Parameters api_key (Optional[str]) – WhyLabs API key. Optional because the preferred way to specify the API key is with environment variable WHYLABS_API_KEY. org_id (Optional[str]) – WhyLabs organization id to write profiles to. If not set must be specified in environment variable WHYLABS_DEFAULT_ORG_ID. dataset_id (Optional[str]) – The model or dataset this callback is gathering telemetry for. If not set must be specified in environment variable WHYLABS_DEFAULT_DATASET_ID. sentiment (bool) – If True will initialize a model to perform sentiment analysis compound score. Defaults to False and will not gather this metric. toxicity (bool) – If True will initialize a model to score toxicity. Defaults to False and will not gather this metric.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html
bbb087daf476-3
toxicity. Defaults to False and will not gather this metric. themes (bool) – If True will initialize a model to calculate distance to configured themes. Defaults to False and will not gather this metric. on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶ Do nothing. on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Do nothing. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Do nothing. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Do nothing. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Pass the generated response to the logger. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Do nothing. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Do nothing. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Pass the input prompts to the logger
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html
bbb087daf476-4
Pass the input prompts to the logger on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_text(text: str, **kwargs: Any) → None[source]¶ Do nothing. on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶ Do nothing. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Do nothing. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Do nothing. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html
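A minimal usage sketch for this handler, assuming the WhyLabs environment variables described above are set, that the object returned by from_params can be placed in a callbacks list, and that an OpenAI LLM is available (the model choice and prompt are illustrative):
from langchain.callbacks.whylabs_callback import WhyLabsCallbackHandler
from langchain.llms import OpenAI
# Telemetry flags default to False, so enable only the metrics to profile.
whylabs = WhyLabsCallbackHandler.from_params(sentiment=True, toxicity=True)
llm = OpenAI(temperature=0, callbacks=[whylabs])
llm("Write one sentence about data logging.")  # on_llm_start / on_llm_end feed prompts and responses to the logger
whylabs.flush()  # write the current whylogs profile to WhyLabs
whylabs.close()  # shut down the rolling logger when finished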
93c82b08da84-0
langchain.chains.api.base.APIChain¶ class langchain.chains.api.base.APIChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, api_request_chain: LLMChain, api_answer_chain: LLMChain, requests_wrapper: TextRequestsWrapper, api_docs: str, question_key: str = 'question', output_key: str = 'output')[source]¶ Bases: Chain Chain that makes API calls and summarizes the responses to answer a question. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_answer_chain: LLMChain [Required]¶ param api_docs: str [Required]¶ param api_request_chain: LLMChain [Required]¶ param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-1
There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param requests_wrapper: TextRequestsWrapper [Required]¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-2
chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-3
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-4
these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example .. code-block:: python chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …}
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-5
# -> {“_type”: “foo”, “verbose”: False, …} classmethod from_llm_and_api_docs(llm: BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) → APIChain[source]¶ Load chain from just an LLM and the api docs. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
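A minimal construction sketch for from_llm_and_api_docs, assuming an OpenAI LLM and a toy api_docs string standing in for real API documentation (the endpoint is hypothetical):
from langchain.chains import APIChain
from langchain.llms import OpenAI
# Toy documentation for a hypothetical weather endpoint; real API docs would go here.
api_docs = (
    "Base URL: https://api.example.com\n"
    "GET /weather?city=<city> returns the current weather for <city> as JSON."
)
llm = OpenAI(temperature=0)
chain = APIChain.from_llm_and_api_docs(llm, api_docs, verbose=True)
# chain.run("What is the weather in Boise right now?")  # generates the request URL, calls it, and summarizes the response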
93c82b08da84-6
Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-7
a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
93c82b08da84-8
to_json_not_implemented() → SerializedNotImplemented¶ validator validate_api_answer_prompt  »  all fields[source]¶ Check that api answer prompt expects the right variables. validator validate_api_request_prompt  »  all fields[source]¶ Check that api request prompt expects the right variables. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.base.APIChain.html
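To make the __call__ parameters above concrete, a short sketch that assumes the chain built in the previous from_llm_and_api_docs example and the default question_key/output_key of 'question'/'output':
# `chain` is the APIChain from the construction sketch above.
result = chain(
    {"question": "What is the weather in Boise right now?"},
    return_only_outputs=True,  # omit the echoed inputs; only keys produced by the chain are returned
    include_run_info=True,     # also attach run metadata to the result dict
    tags=["api-demo"],         # forwarded to callbacks for this run only
)
print(result["output"])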
8de28eb1935f-0
langchain.chains.api.openapi.chain.OpenAPIEndpointChain¶ class langchain.chains.api.openapi.chain.OpenAPIEndpointChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, api_request_chain: LLMChain, api_response_chain: Optional[LLMChain] = None, api_operation: APIOperation, requests: Requests = None, param_mapping: _ParamMapping, return_intermediate_steps: bool = False, instructions_key: str = 'instructions', output_key: str = 'output', max_text_length: Optional[ConstrainedIntValue] = None)[source]¶ Bases: Chain, BaseModel Chain interacts with an OpenAPI endpoint using natural language. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_operation: APIOperation [Required]¶ param api_request_chain: LLMChain [Required]¶ param api_response_chain: Optional[LLMChain] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
8de28eb1935f-1
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param param_mapping: _ParamMapping [Required]¶ param requests: Requests [Optional]¶ param return_intermediate_steps: bool = False¶ param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
8de28eb1935f-2
only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
8de28eb1935f-3
chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
8de28eb1935f-4
sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." deserialize_json_input(serialized_args: str) → dict[source]¶ Use the serialized typescript dictionary. Resolve the path, query params dict, and optional requestBody dict. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example .. code-block:: python chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …}
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
8de28eb1935f-5
# -> {“_type”: “foo”, “verbose”: False, …} classmethod from_api_operation(operation: APIOperation, llm: BaseLanguageModel, requests: Optional[Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → OpenAPIEndpointChain[source]¶ Create an OpenAPIEndpointChain from an operation and a spec. classmethod from_url_and_method(spec_url: str, path: str, method: str, llm: BaseLanguageModel, requests: Optional[Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) → OpenAPIEndpointChain[source]¶ Create an OpenAPIEndpoint from a spec at the specified url. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
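A hedged sketch of from_url_and_method; the spec URL, path, and method are hypothetical placeholders, and the OpenAI LLM is an assumption:
from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
chain = OpenAPIEndpointChain.from_url_and_method(
    spec_url="https://example.com/openapi.json",  # hypothetical OpenAPI spec location
    path="/weather",
    method="get",
    llm=llm,
)
# chain.run("What is the current weather in Boise?")  # the input key defaults to 'instructions'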
8de28eb1935f-6
Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..."
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
8de28eb1935f-7
# -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html
57c74c3a92d0-0
langchain.chains.api.openapi.requests_chain.APIRequesterChain¶ class langchain.chains.api.openapi.requests_chain.APIRequesterChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶ Bases: LLMChain Get the request parser. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-1
There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param output_key: str = 'text'¶ param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: BasePromptTemplate [Required]¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-2
Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-3
Call apply and then parse the results. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-4
Generate LLM result from inputs. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
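Since this class inherits the batch helpers above from LLMChain, here is a generic, hedged sketch of apply; the OpenAI model and the toy prompt are assumptions, not taken from the page:
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(input_variables=["city"], template="Name one landmark in {city}.")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
# apply() batches all prompts through a single generate() call for speed.
results = chain.apply([{"city": "Boise"}, {"city": "Munich"}])
# -> one dict per input, keyed by output_key ('text'), e.g. [{'text': '...'}, {'text': '...'}]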
57c74c3a92d0-5
Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..."
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-6
# -> "The temperature in Boise is..." create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example .. code-block:: python chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} classmethod from_llm_and_typescript(llm: BaseLanguageModel, typescript_definition: str, verbose: bool = True, **kwargs: Any) → LLMChain[source]¶ Get the request parser. classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
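A hedged sketch of from_llm_and_typescript; the TypeScript-style argument definition and the OpenAI LLM are illustrative assumptions:
from langchain.chains.api.openapi.requests_chain import APIRequesterChain
from langchain.llms import OpenAI
# Toy TypeScript-style description of the request arguments the model should produce.
typescript_definition = (
    "{\n"
    '  "city": string,  // city to look up\n'
    '  "units"?: "metric" | "imperial"\n'
    "}"
)
requester = APIRequesterChain.from_llm_and_typescript(
    llm=OpenAI(temperature=0),
    typescript_definition=typescript_definition,
    verbose=True,
)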
57c74c3a92d0-7
Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-8
has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml")
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
57c74c3a92d0-9
Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterChain.html
daf8c8e64c24-0
langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser¶ class langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser[source]¶ Bases: BaseOutputParser Parse the request and error tags. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(llm_output: str) → str[source]¶ Parse the request and error tags. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser.html
daf8c8e64c24-1
property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.requests_chain.APIRequesterOutputParser.html
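A hedged sketch of using the parser on raw model output; the sample string is hypothetical, since the exact request/error tag format is set by the API requester prompt rather than shown on this page:
from langchain.chains.api.openapi.requests_chain import APIRequesterOutputParser
parser = APIRequesterOutputParser()
raw_llm_output = '```json\n{"city": "Boise"}\n```'  # hypothetical model output containing a request block
print(parser.parse(raw_llm_output))                 # the serialized request, or an error message if the tags are absent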
835cfcca95c3-0
langchain.chains.api.openapi.response_chain.APIResponderChain¶ class langchain.chains.api.openapi.response_chain.APIResponderChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶ Bases: LLMChain Get the response parser. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-1
There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param output_key: str = 'text'¶ param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: BasePromptTemplate [Required]¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-2
Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-3
Call apply and then parse the results. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-4
Generate LLM result from inputs. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-5
Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..."
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-6
# -> "The temperature in Boise is..." create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example ..code-block:: python chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} classmethod from_llm(llm: BaseLanguageModel, verbose: bool = True, **kwargs: Any) → LLMChain[source]¶ Get the response parser. classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-7
Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-8
info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
835cfcca95c3-9
validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderChain.html
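A hedged sketch of building and invoking the chain via the documented from_llm constructor and predict. The keyword arguments passed to predict must match the input variables of the underlying response prompt; the names response and instructions below are assumptions for illustration, not guaranteed by this page.

from langchain.chains.api.openapi.response_chain import APIResponderChain
from langchain.llms import OpenAI

# from_llm builds the chain with its stock response prompt; verbose defaults
# to True for this helper, so it is turned off explicitly here.
responder = APIResponderChain.from_llm(OpenAI(temperature=0), verbose=False)

# predict() formats the prompt with the given keys and returns the completion.
# NOTE: "response" and "instructions" are assumed input-variable names.
answer = responder.predict(
    response='{"status": 200, "body": {"temperature": 72}}',
    instructions="What is the temperature right now?",
)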
c338f1218e15-0
langchain.chains.api.openapi.response_chain.APIResponderOutputParser¶ class langchain.chains.api.openapi.response_chain.APIResponderOutputParser[source]¶ Bases: BaseOutputParser Parse the response and error tags. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(llm_output: str) → str[source]¶ Parse the response and error tags. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderOutputParser.html
c338f1218e15-1
property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.api.openapi.response_chain.APIResponderOutputParser.html
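Like the requester parser, this class is normally set as the output_parser of an LLMChain so that predict_and_parse returns already-cleaned text. A rough sketch under that assumption; the prompt shown is purely illustrative (the real response prompt ships with APIResponderChain and instructs the model to emit the tags this parser expects):

from langchain.chains import LLMChain
from langchain.chains.api.openapi.response_chain import APIResponderOutputParser
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Illustrative prompt only; see APIResponderChain for the chain that bundles
# the real prompt together with this parser.
prompt = PromptTemplate.from_template(
    "Summarize this API response for the user: {response}"
)
chain = LLMChain(llm=OpenAI(), prompt=prompt, output_parser=APIResponderOutputParser())

# predict_and_parse() runs the LLM, then feeds the raw text through parse().
summary = chain.predict_and_parse(response='{"status": 200}')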
eeea65d65153-0
langchain.chains.base.Chain¶ class langchain.chains.base.Chain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None)[source]¶ Bases: Serializable, ABC Abstract base class for creating structured sequences of calls to components. Chains should be used to encode a sequence of calls to components like models, document retrievers, other chains, etc., and provide a simple interface to this sequence. The Chain interface makes it easy to create apps that are: Stateful: add Memory to any Chain to give it state, Observable: pass Callbacks to a Chain to execute additional functionality, like logging, outside the main sequence of component calls, Composable: the Chain API is flexible enough that it is easy to combine Chains with other components, including other Chains. The main methods exposed by chains are: __call__: Chains are callable. The __call__ method is the primary way to execute a Chain. This takes inputs as a dictionary and returns a dictionary output. run: A convenience method that takes inputs as args/kwargs and returns the output as a string. This method can only be used for a subset of chains and cannot return as rich of an output as __call__. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-1
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param memory: Optional[langchain.schema.memory.BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-2
will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any][source]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-3
Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any][source]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]][source]¶ Call the chain on all inputs in the list.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-4
Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str[source]¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string:
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-5
# and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." dict(**kwargs: Any) → Dict[source]¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str][source]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str][source]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields[source]¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-6
validator raise_callback_manager_deprecation  »  all fields[source]¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str[source]¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..."
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-7
# -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None[source]¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose[source]¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ abstract property input_keys: List[str]¶ Return the keys expected to be in the chain input. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. abstract property output_keys: List[str]¶ Return the keys expected to be in the chain output. model Config[source]¶ Bases: object
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
eeea65d65153-8
model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.base.Chain.html
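Since Chain is abstract, a concrete subclass has to declare its input_keys and output_keys and implement the hook that __call__ dispatches to. A minimal sketch follows; note that the _call method is an assumption based on typical Chain subclasses, since this page only documents the public surface.

from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain

class ShoutChain(Chain):
    """Toy chain that upper-cases its single input."""

    @property
    def input_keys(self) -> List[str]:
        # Keys expected in the input dictionary passed to __call__/run.
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        # Keys present in the dictionary returned by __call__.
        return ["shout"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        # Assumed private hook; the public entry points are __call__ and run.
        return {"shout": inputs["text"].upper()}

chain = ShoutChain()
chain.run(text="hello")     # run: returns the single string output
chain({"text": "hello"})    # __call__: dict in, dict of named outputs out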
8045895c16ef-0
langchain.chains.combine_documents.base.AnalyzeDocumentChain¶ class langchain.chains.combine_documents.base.AnalyzeDocumentChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_document', text_splitter: TextSplitter = None, combine_docs_chain: BaseCombineDocumentsChain)[source]¶ Bases: Chain Chain that splits documents, then analyzes them in pieces. This chain is parameterized by a TextSplitter and a CombineDocumentsChain. This chain takes a single document as input, and then splits it up into chunks and then passes those chunks to the CombineDocumentsChain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-1
and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param text_splitter: langchain.text_splitter.TextSplitter [Optional]¶ param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-2
memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-3
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-4
sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-5
Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-6
sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
8045895c16ef-7
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.AnalyzeDocumentChain.html
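A hedged end-to-end sketch: the chain is handed a text splitter plus any BaseCombineDocumentsChain, and run() is called with the default input key input_document. The summarize chain and file name below are illustrative assumptions, not part of this page.

from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter

llm = OpenAI(temperature=0)

# Any BaseCombineDocumentsChain works here; a stock summarize chain is one option.
combine_chain = load_summarize_chain(llm, chain_type="map_reduce")

analyze_chain = AnalyzeDocumentChain(
    combine_docs_chain=combine_chain,
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
)

# The raw text is split into chunks and fed to the combine-documents chain.
with open("long_document.txt") as f:  # illustrative file name
    summary = analyze_chain.run(input_document=f.read())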
999ef90924dd-0
langchain.chains.combine_documents.base.BaseCombineDocumentsChain¶ class langchain.chains.combine_documents.base.BaseCombineDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text')[source]¶ Bases: Chain, ABC Base interface for chains combining documents. Subclasses of this chain deal with combining documents in a variety of ways. This base class exists to add some uniformity in the interface these types of chains should expose. Namely, they expect an input key related to the documents to use (default input_documents), and then also expose a method to calculate the length of a prompt from documents (useful for outside callers to use to determine whether it’s safe to pass a list of documents into this chain or whether that will be longer than the context length). Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
999ef90924dd-1
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
999ef90924dd-2
memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
999ef90924dd-3
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. abstract async acombine_docs(docs: List[Document], **kwargs: Any) → Tuple[str, dict][source]¶ Combine documents into a single string. Parameters docs – List[Document], the documents to combine **kwargs – Other parameters to use in combining documents, often other inputs to the prompt. Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
999ef90924dd-4
has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." abstract combine_docs(docs: List[Document], **kwargs: Any) → Tuple[str, dict][source]¶ Combine documents into a single string. Parameters docs – List[Document], the documents to combine **kwargs – Other parameters to use in combining documents, often
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
999ef90924dd-5
**kwargs – Other parameters to use in combining documents, often other inputs to the prompt. Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prompt_length(docs: List[Document], **kwargs: Any) → Optional[int][source]¶ Return the prompt length given the documents passed in.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
999ef90924dd-6
Return the prompt length given the documents passed in. This can be used by a caller to determine whether passing in a list of documents would exceed a certain prompt length. This is useful when trying to ensure that the size of a prompt remains below a certain context limit. Parameters docs – List[Document], a list of documents to use to calculate the total prompt length. Returns Returns None if the method does not depend on the prompt length, otherwise the length of the prompt in tokens. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in
tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶
property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.base.BaseCombineDocumentsChain.html
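BaseCombineDocumentsChain only defines the interface; combine_docs and acombine_docs are what a subclass must supply. The following is a minimal, purely illustrative sketch (this class and its behaviour are not part of the library) that simply concatenates the page_content of the documents, which is enough to exercise run() and the default input_documents/output_text keys documented above.

from typing import Any, List, Tuple

from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.docstore.document import Document


class ConcatDocumentsChain(BaseCombineDocumentsChain):
    """Hypothetical subclass: joins documents with a separator."""

    separator: str = "\n\n"

    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        # First element: the single string output; second: extra keys to return.
        return self.separator.join(doc.page_content for doc in docs), {}

    async def acombine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        return self.combine_docs(docs, **kwargs)


docs = [Document(page_content="First."), Document(page_content="Second.")]
chain = ConcatDocumentsChain()
print(chain.run(input_documents=docs))  # -> "First.\n\nSecond."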
langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain¶ class langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', llm_chain: LLMChain, reduce_documents_chain: BaseCombineDocumentsChain, document_variable_name: str, return_intermediate_steps: bool = False)[source]¶ Bases: BaseCombineDocumentsChain Combining documents by mapping a chain over them, then combining results. We first call llm_chain on each document individually, passing in the page_content and any other kwargs. This is the map step. We then process the results of that map step in a reduce step. This should likely be a ReduceDocumentsChain. Example from langchain.chains import ( StuffDocumentsChain, LLMChain, ReduceDocumentsChain, MapReduceDocumentsChain, ) from langchain.prompts import PromptTemplate from langchain.llms import OpenAI # This controls how each document will be formatted. Specifically, # it will be passed to `format_document` - see that function for more # details. document_prompt = PromptTemplate( input_variables=["page_content"], template="{page_content}" ) document_variable_name = "context" llm = OpenAI() # The prompt here should take as an input variable the # `document_variable_name` prompt = PromptTemplate.from_template( "Summarize this content: {context}" )
"Summarize this content: {context}" ) llm_chain = LLMChain(llm=llm, prompt=prompt) # We now define how to combine these summaries reduce_prompt = PromptTemplate.from_template( "Combine these summaries: {context}" ) reduce_llm_chain = LLMChain(llm=llm, prompt=reduce_prompt) combine_documents_chain = StuffDocumentsChain( llm_chain=reduce_llm_chain, document_prompt=document_prompt, document_variable_name=document_variable_name ) reduce_documents_chain = ReduceDocumentsChain( combine_documents_chain=combine_documents_chain, ) chain = MapReduceDocumentsChain( llm_chain=llm_chain, reduce_documents_chain=reduce_documents_chain, ) # If we wanted to, we could also pass in collapse_documents_chain # which is specifically aimed at collapsing documents BEFORE # the final call. prompt = PromptTemplate.from_template( "Collapse this content: {context}" ) llm_chain = LLMChain(llm=llm, prompt=prompt) collapse_documents_chain = StuffDocumentsChain( llm_chain=llm_chain, document_prompt=document_prompt, document_variable_name=document_variable_name ) reduce_documents_chain = ReduceDocumentsChain( combine_documents_chain=combine_documents_chain, collapse_documents_chain=collapse_documents_chain, ) chain = MapReduceDocumentsChain( llm_chain=llm_chain, reduce_documents_chain=reduce_documents_chain, ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param document_variable_name: str [Required]¶ The variable name in the llm_chain to put the documents in. If only one variable in the llm_chain, this need not be provided. param llm_chain: LLMChain [Required]¶ Chain to apply to each document individually. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param reduce_documents_chain: BaseCombineDocumentsChain [Required]¶ Chain to use to reduce the results of applying llm_chain to each doc. This is typically either a ReduceDocumentsChain or a StuffDocumentsChain. param return_intermediate_steps: bool = False¶ Return the results of the map steps in the output. param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns
to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async acombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents in a map reduce manner. Combine by mapping first chain over all documents, then reducing the results. This reducing can be done recursively if needed (if there are many documents). apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns
directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." combine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶ Combine documents in a map reduce manner. Combine by mapping first chain over all documents, then reducing the results. This reducing can be done recursively if needed (if there are many documents). dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example ..code-block:: python chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} validator get_default_document_variable_name  »  all fields[source]¶ Get default document variable name, if not provided. validator get_reduce_chain  »  all fields[source]¶ For backwards compatibility. validator get_return_intermediate_steps  »  all fields[source]¶ For backwards compatibility.
validator get_return_intermediate_steps  »  all fields[source]¶ For backwards compatibility. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]¶ Return the prompt length given the documents passed in. This can be used by a caller to determine whether passing in a list of documents would exceed a certain prompt length. This useful when trying to ensure that the size of a prompt remains below a certain context limit. Parameters docs – List[Document], a list of documents to use to calculate the total prompt length. Returns Returns None if the method does not depend on the prompt length, otherwise the length of the prompt in tokens. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used.
Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string:
# and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property collapse_document_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain¶ Kept for backward compatibility. property combine_document_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain¶ Kept for backward compatibility. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html
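As a rough usage sketch (not from the documented page), the MapReduceDocumentsChain assembled in the example above can be invoked like any other combine-documents chain. The documents below are invented for illustration, and OpenAI() assumes an OPENAI_API_KEY is configured in the environment.

from langchain.docstore.document import Document

docs = [
    Document(page_content="LangChain ships several chains for combining documents."),
    Document(page_content="MapReduceDocumentsChain maps an LLMChain over each document, then reduces the results."),
]

# `chain` is the MapReduceDocumentsChain built in the example above.
summary = chain.run(input_documents=docs)

# combine_docs exposes the same logic directly and also returns a dict of extra
# keys (e.g. intermediate results when return_intermediate_steps=True).
summary, extra = chain.combine_docs(docs)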
langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain¶ class langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', llm_chain: LLMChain, document_variable_name: str, rank_key: str, answer_key: str, metadata_keys: Optional[List[str]] = None, return_intermediate_steps: bool = False)[source]¶ Bases: BaseCombineDocumentsChain Combining documents by mapping a chain over them, then reranking results. This algorithm calls an LLMChain on each input document. The LLMChain is expected to have an OutputParser that parses the result into both an answer (answer_key) and a score (rank_key). The answer with the highest score is then returned. Example: from langchain.chains import StuffDocumentsChain, LLMChain from langchain.prompts import PromptTemplate from langchain.llms import OpenAI from langchain.output_parsers.regex import RegexParser document_variable_name = "context" llm = OpenAI() # The prompt here should take as an input variable the # `document_variable_name` # The actual prompt will need to be a lot more complex, this is just # an example. prompt_template = ( "Use the following context to tell me the chemical formula " "for water. Output both your answer and a score of how confident " "you are. Context: {context}" )
"you are. Context: {content}" ) output_parser = RegexParser( regex=r"(.*?) Score: (.*)”, output_keys=[“answer”, “score”], ) prompt = PromptTemplate( template=prompt_template, input_variables=[“context”], output_parser=output_parser, ) llm_chain = LLMChain(llm=llm, prompt=prompt) chain = MapRerankDocumentsChain( llm_chain=llm_chain, document_variable_name=document_variable_name, rank_key=”score”, answer_key=”answer”, ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param answer_key: str [Required]¶ Key in output of llm_chain to return as answer. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param document_variable_name: str [Required]¶ The variable name in the llm_chain to put the documents in. If only one variable in the llm_chain, this need not be provided. param llm_chain: LLMChain [Required]¶ Chain to apply to each document individually. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes
and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param metadata_keys: Optional[List[str]] = None¶ Additional metadata from the chosen document to return. param rank_key: str [Required]¶ Key in output of llm_chain to rank on. param return_intermediate_steps: bool = False¶ Return intermediate steps. Intermediate steps include the results of calling llm_chain on each document. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters
Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be
response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶ Combine documents in a map rerank manner. Combine by mapping first chain over all documents, then reranking the results. Parameters docs – List of documents to combine callbacks – Callbacks to be passed through **kwargs – additional parameters to be passed to LLM calls (like other input variables besides the documents) Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list.
Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string:
# and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." combine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶ Combine documents in a map rerank manner. Combine by mapping first chain over all documents, then reranking the results. Parameters docs – List of documents to combine callbacks – Callbacks to be passed through **kwargs – additional parameters to be passed to LLM calls (like other input variables besides the documents) Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example ..code-block:: python chain.dict(exclude_unset=True) # -> {“_type”: “foo”, “verbose”: False, …} validator get_default_document_variable_name  »  all fields[source]¶ Get default document variable name, if not provided. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s
Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]¶ Return the prompt length given the documents passed in. This can be used by a caller to determine whether passing in a list of documents would exceed a certain prompt length. This useful when trying to ensure that the size of a prompt remains below a certain context limit. Parameters docs – List[Document], a list of documents to use to calculate the total prompt length. Returns Returns None if the method does not depend on the prompt length, otherwise the length of the prompt in tokens. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None¶ Save the chain. Expects Chain._chain_type property to be implemented and for memory to benull. Parameters file_path – Path to file to save the chain to. Example chain.save(file_path="path/chain.yaml")
Example chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_llm_output  »  all fields[source]¶ Validate that the combine chain outputs a dictionary. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain.html
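A brief, hypothetical usage sketch of the map-rerank chain assembled above (not part of the documented page): the documents are made up, and the quality of the result depends on the LLM following the answer/score format that the RegexParser expects.

from langchain.docstore.document import Document

docs = [
    Document(page_content="Water is the chemical compound H2O."),
    Document(page_content="Table salt is the chemical compound NaCl."),
]

# `chain` is the MapRerankDocumentsChain built in the example above.
answer, extra = chain.combine_docs(docs)  # answer parsed from the highest-scoring document
print(answer)

# Or, since the chain has a single string output by default:
print(chain.run(input_documents=docs))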
langchain.chains.combine_documents.reduce.AsyncCombineDocsProtocol¶ class langchain.chains.combine_documents.reduce.AsyncCombineDocsProtocol(*args, **kwargs)[source]¶ Bases: Protocol Interface for the combine_docs method. Methods __init__(*args, **kwargs) async __call__(docs: List[Document], **kwargs: Any) → str[source]¶ Async interface for the combine_docs method.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.reduce.AsyncCombineDocsProtocol.html
langchain.chains.combine_documents.reduce.CombineDocsProtocol¶ class langchain.chains.combine_documents.reduce.CombineDocsProtocol(*args, **kwargs)[source]¶ Bases: Protocol Interface for the combine_docs method. Methods __init__(*args, **kwargs) __call__(docs: List[Document], **kwargs: Any) → str[source]¶ Interface for the combine_docs method.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.reduce.CombineDocsProtocol.html
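Because these are typing.Protocol classes, nothing needs to subclass them: any callable with a matching signature conforms. A small illustrative sketch (the joining logic is made up) of a sync and an async callable that satisfy CombineDocsProtocol and AsyncCombineDocsProtocol respectively:

from typing import Any, List

from langchain.docstore.document import Document


def join_docs(docs: List[Document], **kwargs: Any) -> str:
    """Satisfies CombineDocsProtocol: combine documents by concatenation."""
    return "\n\n".join(doc.page_content for doc in docs)


async def ajoin_docs(docs: List[Document], **kwargs: Any) -> str:
    """Satisfies AsyncCombineDocsProtocol: async variant of the same logic."""
    return join_docs(docs, **kwargs)


print(join_docs([Document(page_content="a"), Document(page_content="b")]))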
langchain.chains.combine_documents.reduce.ReduceDocumentsChain¶ class langchain.chains.combine_documents.reduce.ReduceDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', combine_documents_chain: BaseCombineDocumentsChain, collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None, token_max: int = 3000)[source]¶ Bases: BaseCombineDocumentsChain Combining documents by recursively reducing them. This involves two chains: combine_documents_chain and collapse_documents_chain. combine_documents_chain is ALWAYS provided. This is the final chain that is called. We pass all previous results to this chain, and the output of this chain is returned as a final result. collapse_documents_chain is used if the documents passed in are too many to all be passed to combine_documents_chain in one go. In this case, collapse_documents_chain is called recursively on groups of documents as large as are allowed. Example from langchain.chains import ( StuffDocumentsChain, LLMChain, ReduceDocumentsChain ) from langchain.prompts import PromptTemplate from langchain.llms import OpenAI # This controls how each document will be formatted. Specifically, # it will be passed to `format_document` - see that function for more # details. document_prompt = PromptTemplate( input_variables=["page_content"], template="{page_content}" ) document_variable_name = "context" llm = OpenAI()
# The prompt here should take as an input variable the # `document_variable_name` prompt = PromptTemplate.from_template( "Summarize this content: {context}" ) llm_chain = LLMChain(llm=llm, prompt=prompt) combine_documents_chain = StuffDocumentsChain( llm_chain=llm_chain, document_prompt=document_prompt, document_variable_name=document_variable_name ) chain = ReduceDocumentsChain( combine_documents_chain=combine_documents_chain, ) # If we wanted to, we could also pass in collapse_documents_chain # which is specifically aimed at collapsing documents BEFORE # the final call. prompt = PromptTemplate.from_template( "Collapse this content: {context}" ) llm_chain = LLMChain(llm=llm, prompt=prompt) collapse_documents_chain = StuffDocumentsChain( llm_chain=llm_chain, document_prompt=document_prompt, document_variable_name=document_variable_name ) chain = ReduceDocumentsChain( combine_documents_chain=combine_documents_chain, collapse_documents_chain=collapse_documents_chain, ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None¶
param collapse_documents_chain: Optional[BaseCombineDocumentsChain] = None¶ Chain to use to collapse documents if needed until they can all fit. If None, will use the combine_documents_chain. This is typically a StuffDocumentsChain. param combine_documents_chain: BaseCombineDocumentsChain [Required]¶ Final chain to call to combine documents. This is typically a StuffDocumentsChain. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param token_max: int = 3000¶ The maximum number of tokens to group documents into. For example, if set to 3000 then documents will be grouped into chunks of no greater than 3000 tokens before trying to combine them into a smaller chunk. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs
Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys.
Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified inChain.output_keys. async acombine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶ Combine multiple documents recursively. Parameters
Combine multiple documents recursively. Parameters docs – List of documents to combine, assumed that each one is less than token_max. token_max – Recursively creates groups of documents less than this number of tokens. callbacks – Callbacks to be passed through **kwargs – additional parameters to be passed to LLM calls (like other input variables besides the documents) Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this methodcan only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects.
these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." combine_docs(docs: List[Document], token_max: Optional[int] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶ Combine multiple documents recursively. Parameters docs – List of documents to combine, assumed that each one is less than token_max. token_max – Recursively creates groups of documents less than this number of tokens. callbacks – Callbacks to be passed through **kwargs – additional parameters to be passed to LLM calls (like other input variables besides the documents) Returns The first element returned is the single string output. The second element returned is a dictionary of other keys to return. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain.
rtdocs\api.python.langchain.com\en\latest\chains\langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html
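As a usage sketch for the chain assembled in the ReduceDocumentsChain example above (the documents and the token_max value here are illustrative, not from the page):

from langchain.docstore.document import Document

docs = [
    Document(page_content=f"Section {i}: some content worth summarizing.") for i in range(10)
]

# `chain` is the ReduceDocumentsChain built in the example above.
# token_max overrides the default grouping size of 3000 tokens for this call.
reduced, extra = chain.combine_docs(docs, token_max=1000)

# Equivalently, as a regular chain call with the default input/output keys:
print(chain.run(input_documents=docs))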