dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
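As an illustration, a minimal sketch of how these two hooks bracket a chain's core logic (assuming an already-constructed chain; _call here stands in for the chain-specific computation that Chain.__call__ invokes between them, and the input key is hypothetical):
# Hypothetical composition; Chain.__call__ performs roughly these steps.
inputs = chain.prep_inputs({"question": "What is LangChain?"})  # merges in memory variables
outputs = chain._call(inputs)  # chain-specific computation
final = chain.prep_outputs(inputs, outputs, return_only_outputs=False)
# final holds the inputs plus the outputs, and the run has been saved
# to chain.memory (if one is configured).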
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]¶
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list
of documents would exceed a certain prompt length. This is useful when
trying to ensure that the size of a prompt remains below a certain
context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the
total prompt length.
Returns
Returns None if the method does not depend on the prompt length,
otherwise the length of the prompt in tokens.
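As a sketch of the intended use (the 4096-token budget and the docs list are assumptions, not part of the API):
max_tokens = 4096  # assumed model context budget
selected = []
for doc in docs:
    length = chain.prompt_length(selected + [doc])
    if length is not None and length > max_tokens:
        break  # adding this document would exceed the budget
    selected.append(doc)
output = chain.run(input_documents=selected)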
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.combine_documents.refine.RefineDocumentsChain¶
class langchain.chains.combine_documents.refine.RefineDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', initial_llm_chain: LLMChain, refine_llm_chain: LLMChain, document_variable_name: str, initial_response_name: str, document_prompt: BasePromptTemplate = None, return_intermediate_steps: bool = False)[source]¶
Bases: BaseCombineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
This algorithm first calls initial_llm_chain on the first document, passing
that first document in with the variable name document_variable_name, and
produces a new variable with the variable name initial_response_name.
Then, it loops over every remaining document. This is called the “refine” step.
It calls refine_llm_chain,
passing in that document with the variable name document_variable_name
as well as the previous response with the variable name initial_response_name.
Example
from langchain.chains import RefineDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
input_variables=["page_content"],
template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template(
"Summarize this content: {context}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
initial_response_name = "prev_response"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
prompt_refine = PromptTemplate.from_template(
"Here's your first summary: {prev_response}. "
"Now add to it based on the following context: {context}"
)
llm_chain_refine = LLMChain(llm=llm, prompt=prompt_refine)
chain = RefineDocumentsChain(
initial_llm_chain=llm_chain,
refine_llm_chain=llm_chain_refine,
document_prompt=document_prompt,
document_variable_name=document_variable_name,
initial_response_name=initial_response_name,
)
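A usage sketch building on the setup above (the default input key is input_documents; the sample documents are assumptions):
from langchain.schema import Document
docs = [
    Document(page_content="LangChain provides chains for combining documents."),
    Document(page_content="The refine step updates the summary one document at a time."),
]
summary = chain.run(input_documents=docs)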
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param document_prompt: BasePromptTemplate [Optional]¶
Prompt to use to format each document, gets passed to format_document.
param document_variable_name: str [Required]¶
The variable name in the initial_llm_chain to put the documents in.
If only one variable is present in the initial_llm_chain, this need not be provided.
param initial_llm_chain: LLMChain [Required]¶
LLM chain to use on initial document.
param initial_response_name: str [Required]¶
The variable name to format the initial response in when refining.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param refine_llm_chain: LLMChain [Required]¶
LLM chain to use when refining.
param return_intermediate_steps: bool = False¶
Return the results of the refine steps in the output.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents sequentially: run the initial chain on the first document, then refine the result over each remaining document.
Parameters
docs – List of documents to combine
callbacks – Callbacks to be passed through
**kwargs – additional parameters to be passed to LLM calls (like other
input variables besides the documents)
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
combine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Combine documents sequentially: run the initial chain on the first document, then refine the result over each remaining document.
Parameters
docs – List of documents to combine
callbacks – Callbacks to be passed through
**kwargs – additional parameters to be passed to LLM calls (like other
input variables besides the documents)
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
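A sketch of calling combine_docs directly rather than through run (the docs list is assumed to exist; with return_intermediate_steps=True, the per-step refinements are expected in the extra dictionary, under an intermediate_steps key):
text, extra = chain.combine_docs(docs)
print(text)   # the final refined output
print(extra)  # dictionary of other keys, e.g. intermediate steps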
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
validator get_default_document_variable_name » all fields[source]¶
Get default document variable name, if not provided.
validator get_return_intermediate_steps » all fields[source]¶
For backwards compatibility.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int]¶
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list
of documents would exceed a certain prompt length. This is useful when
trying to ensure that the size of a prompt remains below a certain
context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the
total prompt length.
Returns
Returns None if the method does not depend on the prompt length,
otherwise the length of the prompt in tokens.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.combine_documents.stuff.StuffDocumentsChain¶
class langchain.chains.combine_documents.stuff.StuffDocumentsChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, input_key: str = 'input_documents', output_key: str = 'output_text', llm_chain: LLMChain, document_prompt: BasePromptTemplate = None, document_variable_name: str, document_separator: str = '\n\n')[source]¶
Bases: BaseCombineDocumentsChain
Chain that combines documents by stuffing into context.
This chain takes a list of documents and first combines them into a single string.
It does this by formatting each document into a string with the document_prompt
and then joining them together with document_separator. It then adds that new
string to the inputs with the variable name set by document_variable_name.
Those inputs are then passed to the llm_chain.
Example
from langchain.chains import StuffDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
input_variables=["page_content"],
template="{page_content}"
)
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template(
"Summarize this content: {context}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = StuffDocumentsChain(
llm_chain=llm_chain,
document_prompt=document_prompt,
document_variable_name=document_variable_name
)
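A usage sketch building on the setup above: each document is formatted with document_prompt, the results are joined with document_separator, and the combined string is passed to llm_chain as the context variable (the sample documents are assumptions):
from langchain.schema import Document
docs = [
    Document(page_content="First chunk."),
    Document(page_content="Second chunk."),
]
summary = chain.run(input_documents=docs)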
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param document_prompt: langchain.schema.prompt_template.BasePromptTemplate [Optional]¶
Prompt to use to format each document, gets passed to format_document.
param document_separator: str = '\n\n'¶
The string with which to join the formatted documents.
param document_variable_name: str [Required]¶
The variable name in the llm_chain to put the documents in.
If only one variable is present in the llm_chain, this need not be provided.
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
LLM chain which is called with the formatted document string,
along with any other inputs.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acombine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Stuff all documents into one prompt and pass to LLM.
Parameters
docs – List of documents to join together into one variable
callbacks – Optional callbacks to pass along
**kwargs – additional parameters to use to get inputs to LLMChain.
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
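A sketch of the async variant, which mirrors combine_docs and returns the same tuple (chain and docs are assumed to exist as above):
import asyncio

async def combine() -> None:
    text, extra = await chain.acombine_docs(docs)
    print(text)

asyncio.run(combine())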
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
combine_docs(docs: List[Document], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Tuple[str, dict][source]¶
Stuff all documents into one prompt and pass to LLM.
Parameters
docs – List of documents to join together into one variable
callbacks – Optional callbacks to pass along
**kwargs – additional parameters to use to get inputs to LLMChain.
Returns
The first element returned is the single string output. The second
element returned is a dictionary of other keys to return.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
validator get_default_document_variable_name » all fields[source]¶
Get default document variable name, if not provided.
If only one variable is present in the llm_chain.prompt,
we can infer that the formatted documents should be passed in
with this variable name.
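A sketch of that inference (the prompt template and the "text" variable name here are assumptions): with a single-variable prompt, document_variable_name can be omitted and the validator fills it in.
from langchain.chains import StuffDocumentsChain, LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

prompt = PromptTemplate.from_template("Summarize: {text}")
chain = StuffDocumentsChain(llm_chain=LLMChain(llm=OpenAI(), prompt=prompt))
assert chain.document_variable_name == "text"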
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prompt_length(docs: List[Document], **kwargs: Any) → Optional[int][source]¶
Return the prompt length given the documents passed in.
This can be used by a caller to determine whether passing in a list
of documents would exceed a certain prompt length. This is useful when
trying to ensure that the size of a prompt remains below a certain
context limit.
Parameters
docs – List[Document], a list of documents to use to calculate the
total prompt length.
Returns
Returns None if the method does not depend on the prompt length,
otherwise the length of the prompt in tokens.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.constitutional_ai.base.ConstitutionalChain¶
class langchain.chains.constitutional_ai.base.ConstitutionalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, chain: LLMChain, constitutional_principles: List[ConstitutionalPrinciple], critique_chain: LLMChain, revision_chain: LLMChain, return_intermediate_steps: bool = False)[source]¶
Bases: Chain
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.prompts import PromptTemplate
llm = OpenAI()
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
llm=llm,
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the meaning of life?")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param chain: langchain.chains.llm.LLMChain [Required]¶
param constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]¶
param critique_chain: langchain.chains.llm.LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param return_intermediate_steps: bool = False¶
param revision_chain: langchain.chains.llm.LLMChain [Required]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, chain: LLMChain, critique_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism,
toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession
measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite
well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is material critique of the model output, append to the end of the Critique: 'Critique needed.'", template_format='f-string', validate_template=True), revision_prompt: BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique':
'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.',
'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make
it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite
well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → ConstitutionalChain[source]¶
Create a chain from an LLM.
classmethod get_principles(names: Optional[List[str]] = None) → List[ConstitutionalPrinciple][source]¶
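Putting the two classmethods together: the sketch below wraps an ordinary LLMChain in a ConstitutionalChain, looking up a built-in principle by name. The OpenAI model, the 'illegal' principle key, and the qa_chain prompt are illustrative assumptions, not part of this reference.
from langchain.chains import LLMChain
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# An ordinary chain whose outputs will be critiqued and revised.
qa_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["question"],
        template="Answer the question: {question}",
    ),
)

# Look up built-in principles by name, then build the chain from the LLM.
principles = ConstitutionalChain.get_principles(["illegal"])  # assumed key
constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=principles,
    llm=llm,
    verbose=True,
)
print(constitutional_chain.run(question="How do I hotwire a car?"))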
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Defines the input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Defines the output keys.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.constitutional_ai.models.ConstitutionalPrinciple¶
class langchain.chains.constitutional_ai.models.ConstitutionalPrinciple(*, critique_request: str, revision_request: str, name: str = 'Constitutional Principle')[source]¶
Bases: BaseModel
Class for a constitutional principle.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param critique_request: str [Required]¶
param name: str = 'Constitutional Principle'¶
param revision_request: str [Required]¶
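As a sketch, a custom principle is just these three fields; the request strings below are invented for illustration, and the resulting object can be passed to ConstitutionalChain via constitutional_principles:
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

# Hypothetical principle: both request strings are invented examples.
plain_language = ConstitutionalPrinciple(
    name="Plain Language Principle",
    critique_request="Identify any jargon or overly technical wording in the model's response.",
    revision_request="Rewrite the response in plain language that a layperson can follow.",
)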
langchain.chains.conversation.base.ConversationChain¶
class langchain.chains.conversation.base.ConversationChain(*, memory: BaseMemory = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True), llm: BaseLanguageModel, output_key: str = 'response', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None, input_key: str = 'input')[source]¶
Bases: LLMChain
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
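Expanding that example into a short multi-turn session, as a sketch; ConversationBufferMemory is one concrete choice for the memory param and is assumed here:
from langchain import ConversationChain, OpenAI
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # fills the {history} prompt variable
    verbose=True,
)

conversation.predict(input="Hi, my name is Ada.")
# The second turn can refer back to the first because memory injects the history.
print(conversation.predict(input="What is my name?"))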
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: langchain.schema.memory.BaseMemory [Optional]¶
Default memory store.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: langchain.schema.prompt_template.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)¶
Default conversation prompt to use.
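The default can be swapped for any template that keeps the history and input variables expected by input_keys; a hypothetical replacement:
from langchain import ConversationChain, OpenAI
from langchain.prompts import PromptTemplate

# Hypothetical prompt; it must retain the 'history' and 'input' variables.
terse_prompt = PromptTemplate(
    input_variables=["history", "input"],
    template=(
        "You are a terse assistant. Answer in one sentence.\n\n"
        "Current conversation:\n{history}\n"
        "Human: {input}\n"
        "AI:"
    ),
)
conversation = ConversationChain(llm=OpenAI(), prompt=terse_prompt)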
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If false, will return a bunch of extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
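A sketch of the batching pattern behind apply/aapply and their *_and_parse variants; the chain and the input dicts are illustrative:
import asyncio

from langchain import LLMChain, OpenAI, PromptTemplate

chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate(
        input_variables=["city"],
        template="Name one landmark in {city}.",
    ),
)
inputs = [{"city": "Paris"}, {"city": "Tokyo"}, {"city": "Cairo"}]

# Synchronous batch: one generate() call instead of three separate runs.
results = chain.apply(inputs)  # -> [{'text': '...'}, ...]

# Asynchronous batch via aapply.
results = asyncio.run(chain.aapply(inputs))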
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = await llm.apredict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
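A minimal sketch of from_string, which builds the PromptTemplate from a bare f-string for you; the template text is illustrative:
from langchain import LLMChain, OpenAI

# Equivalent to constructing a PromptTemplate with input_variables=["product"].
chain = LLMChain.from_string(
    llm=OpenAI(),
    template="Suggest a company name for a maker of {product}.",
)
print(chain.predict(product="colorful socks"))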
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_prompt_input_variables » all fields[source]¶
Validate that prompt input variables are consistent.
property input_keys: List[str]¶
Use this since some prompt vars come from history.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain¶
class langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_docs_chain: BaseCombineDocumentsChain, question_generator: LLMChain, output_key: str = 'answer', rephrase_question: bool = True, return_source_documents: bool = False, return_generated_question: bool = False, get_chat_history: Optional[Callable[[Union[Tuple[str, str], BaseMessage]], str]] = None)[source]¶
Bases: Chain
Chain for chatting with an index.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_docs_chain: BaseCombineDocumentsChain [Required]¶
The chain used to combine any retrieved documents.
param get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None¶
An optional function to get a string of the chat history.
If None is provided, will use a default.
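As a sketch of such a function, assuming it receives the full list of chat turns, each either a (human, ai) tuple or a BaseMessage per the CHAT_TURN_TYPE alias in the signature:
from langchain.schema import BaseMessage

def get_chat_history(chat_history) -> str:
    """Render chat turns into a plain transcript string."""
    lines = []
    for turn in chat_history:
        if isinstance(turn, BaseMessage):
            lines.append(f"{turn.type}: {turn.content}")
        else:  # assumed to be a (human, ai) tuple
            human, ai = turn
            lines.append(f"Human: {human}\nAssistant: {ai}")
    return "\n".join(lines)

# Passed at construction time, e.g. get_chat_history=get_chat_history.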
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param output_key: str = 'answer'¶
The output key to return the final answer of this chain in.
param question_generator: LLMChain [Required]¶
The chain used to generate a new question for the sake of retrieval.
This chain will take in the current question (with variable question)
and any chat history (with variable chat_history) and will produce
a new standalone question to be used later on.
param rephrase_question: bool = True¶
Whether or not to pass the new generated question to the combine_docs_chain.
If True, will pass the new generated question along.
If False, will only use the new generated question for retrieval and pass the
original question along to the combine_docs_chain.
param return_generated_question: bool = False¶
Return the generated question as part of the final result.
param return_source_documents: bool = False¶
Return the retrieved source documents as part of the final result.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
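This base class is normally used through a concrete subclass. A hedged sketch wiring up ConversationalRetrievalChain, with FAISS and OpenAIEmbeddings as illustrative choices:
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Toy index; any VectorStore-backed retriever works here.
vectorstore = FAISS.from_texts(
    ["LangChain chains can be composed.", "Retrievers fetch relevant documents."],
    embedding=OpenAIEmbeddings(),
)

chain = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)

result = chain({"question": "What do retrievers do?", "chat_history": []})
print(result["answer"])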
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
.. code-block:: python
chain.dict(exclude_unset=True)
# -> {“_type”: “foo”, “verbose”: False, …}
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None[source]¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.conversational_retrieval.base.ChatVectorDBChain¶
class langchain.chains.conversational_retrieval.base.ChatVectorDBChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_docs_chain: BaseCombineDocumentsChain, question_generator: LLMChain, output_key: str = 'answer', rephrase_question: bool = True, return_source_documents: bool = False, return_generated_question: bool = False, get_chat_history: Optional[Callable[[Union[Tuple[str, str], BaseMessage]], str]] = None, vectorstore: VectorStore, top_k_docs_for_context: int = 4, search_kwargs: dict = None)[source]¶
Bases: BaseConversationalRetrievalChain
Chain for chatting with a vector database.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_docs_chain: BaseCombineDocumentsChain [Required]¶
The chain used to combine any retrieved documents.
param get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None¶
An optional function to get a string of the chat history.
If None is provided, will use a default.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param output_key: str = 'answer'¶
The output key to return the final answer of this chain in.
param question_generator: LLMChain [Required]¶
The chain used to generate a new question for the sake of retrieval.
This chain will take in the current question (with variable question)
and any chat history (with variable chat_history) and will produce
a new standalone question to be used later on.
param rephrase_question: bool = True¶
Whether or not to pass the new generated question to the combine_docs_chain.
If True, will pass the new generated question along.
If False, will only use the new generated question for retrieval and pass the
original question along to the combine_docs_chain.
param return_generated_question: bool = False¶
Return the generated question as part of the final result.
param return_source_documents: bool = False¶
Return the retrieved source documents as part of the final result.
param search_kwargs: dict [Optional]¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param top_k_docs_for_context: int = 4¶
param vectorstore: VectorStore [Required]¶
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, vectorstore: VectorStore, condense_question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseConversationalRetrievalChain[source]¶
Load chain from LLM.
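As a minimal sketch (not from the source docs; the sample text, embedding model, and LLM below are illustrative assumptions):
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Build a tiny vector store to chat over (illustrative data only).
vectorstore = FAISS.from_texts(["LangChain composes LLM applications."], OpenAIEmbeddings())
chain = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)
result = chain({"question": "What does LangChain do?", "chat_history": []})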
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
validator raise_deprecation » all fields[source]¶
Raise a deprecation warning: ChatVectorDBChain is deprecated in favor of ConversationalRetrievalChain.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain¶
class langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, combine_docs_chain: BaseCombineDocumentsChain, question_generator: LLMChain, output_key: str = 'answer', rephrase_question: bool = True, return_source_documents: bool = False, return_generated_question: bool = False, get_chat_history: Optional[Callable[[Union[Tuple[str, str], BaseMessage]], str]] = None, retriever: BaseRetriever, max_tokens_limit: Optional[int] = None)[source]¶
Bases: BaseConversationalRetrievalChain
Chain for having a conversation based on retrieved documents.
This chain takes in chat history (a list of messages) and new questions,
and then returns an answer to that question.
The algorithm for this chain consists of three parts:
1. Use the chat history and the new question to create a “standalone question”.
This is done so that this question can be passed into the retrieval step to fetch
relevant documents. If only the new question was passed in, then relevant context
may be lacking. If the whole conversation was passed into retrieval, there may
be unnecessary information there that would distract from retrieval.
2. This new question is passed to the retriever and relevant documents are
returned.
3. The retrieved documents are passed to an LLM along with either the new question
(default behavior) or the original question and chat history to generate a final
response.
Example
from langchain.chains import (
StuffDocumentsChain, LLMChain, ConversationalRetrievalChain
)
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
combine_docs_chain = StuffDocumentsChain(...)
vectorstore = ...
retriever = vectorstore.as_retriever()
# This controls how the standalone question is generated.
# Should take `chat_history` and `question` as input variables.
template = (
"Combine the chat history and follow up question into "
"a standalone question. Chat History: {chat_history}"
"Follow up question: {question}"
)
prompt = PromptTemplate.from_template(template)
llm = OpenAI()
llm_chain = LLMChain(llm=llm, prompt=prompt)
chain = ConversationalRetrievalChain(
combine_docs_chain=combine_docs_chain,
retriever=retriever,
question_generator=llm_chain,
)
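Once constructed, the chain is called with the new question plus the accumulated chat history (a hypothetical invocation; the default output key is "answer"):
# chat_history is a list of (human, ai) tuples from earlier turns.
result = chain({
    "question": "Can you elaborate on that?",
    "chat_history": [("What is LangChain?", "A framework for building LLM apps.")],
})
result["answer"]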
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param combine_docs_chain: BaseCombineDocumentsChain [Required]¶
The chain used to combine any retrieved documents.
param get_chat_history: Optional[Callable[[CHAT_TURN_TYPE], str]] = None¶
An optional function to get a string of the chat history.
If None is provided, will use a default.
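As a sketch (not from the source; it assumes the chain hands this callable the full list of prior (human, ai) turns), a custom formatter could look like:
def format_history(turns):
    # Render each (human, ai) tuple as two transcript lines.
    return "\n".join(f"Human: {human}\nAssistant: {ai}" for human, ai in turns)

# Passed at construction time via get_chat_history=format_history.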
param max_tokens_limit: Optional[int] = None¶
If set, restricts the returned documents so that their combined token count stays below this limit.
This is only enforced if combine_docs_chain is of type StuffDocumentsChain.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_key: str = 'answer'¶
The output key to return the final answer of this chain in.
param question_generator: LLMChain [Required]¶
The chain used to generate a new question for the sake of retrieval.
This chain will take in the current question (with variable question)
and any chat history (with variable chat_history) and will produce
a new standalone question to be used later on.
param rephrase_question: bool = True¶
Whether or not to pass the new generated question to the combine_docs_chain.
If True, will pass the new generated question along.
If False, will only use the new generated question for retrieval and pass the
original question along to the combine_docs_chain.
param retriever: BaseRetriever [Required]¶
Retriever to use to fetch documents.
param return_generated_question: bool = False¶
Return the generated question as part of the final result.
param return_source_documents: bool = False¶
Return the retrieved source documents as part of the final result.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, retriever: BaseRetriever, condense_question_prompt: BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', verbose: bool = False, condense_question_llm: Optional[BaseLanguageModel] = None, combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseConversationalRetrievalChain[source]¶
Convenience method to load chain from LLM and retriever.
This provides some logic to create the question_generator chain
as well as the combine_docs_chain.
Parameters
llm – The default language model to use at every part of this chain
(eg in both the question generation and the answering)
retriever – The retriever to use to fetch relevant documents from.
condense_question_prompt – The prompt to use to condense the chat history
and new question into a standalone question.
chain_type – The chain type to use to create the combine_docs_chain, will
be sent to load_qa_chain.
verbose – Verbosity flag for logging to stdout.
condense_question_llm – The language model to use for condensing the chat
history and new question into a standalone question. If none is
provided, will default to llm.
combine_docs_chain_kwargs – Parameters to pass as kwargs to load_qa_chain
when constructing the combine_docs_chain.
callbacks – Callbacks to pass to all subchains.
**kwargs – Additional parameters to pass when initializing
ConversationalRetrievalChain
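Putting these parameters together, an end-to-end sketch (the sample text, embedding model, and chat model are illustrative assumptions, not from the source docs):
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# A tiny vector store to retrieve from (illustrative data only).
vectorstore = FAISS.from_texts(["LangChain supports retrieval chains."], OpenAIEmbeddings())
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)
result = chain({"question": "What does LangChain support?", "chat_history": []})
result["answer"], result["source_documents"]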
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Input keys.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.flare.base.FlareChain¶
class langchain.chains.flare.base.FlareChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, question_generator_chain: QuestionGeneratorChain, response_chain: _ResponseChain = None, output_parser: FinishedOutputParser = None, retriever: BaseRetriever, min_prob: float = 0.2, min_token_gap: int = 5, num_pad_tokens: int = 2, max_iter: int = 10, start_with_retrieval: bool = True)[source]¶
Bases: Chain
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param max_iter: int = 10¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param min_prob: float = 0.2¶
param min_token_gap: int = 5¶
param num_pad_tokens: int = 2¶
param output_parser: FinishedOutputParser [Optional]¶
param question_generator_chain: QuestionGeneratorChain [Required]¶
param response_chain: _ResponseChain [Optional]¶
param retriever: BaseRetriever [Required]¶
param start_with_retrieval: bool = True¶
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_llm(llm: BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any) → FlareChain[source]¶
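The source gives no description here; as a hedged sketch, from_llm builds the question-generator and response chains from the LLM, while remaining fields such as retriever are forwarded through **kwargs. FLARE (Forward-Looking Active REtrieval) interleaves generation with retrieval triggered by low-confidence spans, so a completion model that exposes token log-probabilities is the typical choice. All names below are illustrative assumptions:
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# A tiny vector store for the retrieval step (illustrative data only).
vectorstore = FAISS.from_texts(["FLARE interleaves retrieval with generation."], OpenAIEmbeddings())
flare = FlareChain.from_llm(
    OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),  # required field, forwarded via **kwargs
    max_generation_len=64,
)
flare.run("What does FLARE do?")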
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property input_keys: List[str]¶
Return the keys expected to be in the chain input.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property output_keys: List[str]¶
Return the keys expected to be in the chain output.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chains.flare.base.QuestionGeneratorChain¶
class langchain.chains.flare.base.QuestionGeneratorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, prompt: BasePromptTemplate = PromptTemplate(input_variables=['user_input', 'current_response', 'uncertain_span'], output_parser=None, partial_variables={}, template='Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n\n>>> USER INPUT: {user_input}\n>>> EXISTING PARTIAL RESPONSE: {current_response}\n\nThe question to which the answer is the term/entity/phrase "{uncertain_span}" is:', template_format='f-string', validate_template=True), llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶
Bases: LLMChain
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['user_input', 'current_response', 'uncertain_span'], output_parser=None, partial_variables={}, template='Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:\n\n>>> USER INPUT: {user_input}\n>>> EXISTING PARTIAL RESPONSE: {current_response}\n\nThe question to which the answer is the term/entity/phrase "{uncertain_span}" is:', template_format='f-string', validate_template=True)¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If False, extra information about the generation will also be returned.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None.
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = await chain.apredict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
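For illustration (the model and template below are assumptions):
from langchain.chains import LLMChain
from langchain.llms import OpenAI

# Build a one-step chain directly from a template string.
chain = LLMChain.from_string(llm=OpenAI(), template="Tell me a joke about {topic}.")
chain.run(topic="bears")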
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = chain.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_callback_manager_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Convenience method for executing chain when there’s a single string output.
The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain
has more outputs, a non-string output, or you want to return the inputs/run
info along with the outputs, use Chain.__call__.
The other difference is that this method expects inputs to be passed directly in
as positional arguments or keyword arguments, whereas Chain.__call__ expects
a single input dictionary with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output as a string.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
validator set_verbose » verbose¶
Set the chain verbosity.
Defaults to the global setting if not specified by the user.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
langchain.chains.flare.prompts.FinishedOutputParser¶
class langchain.chains.flare.prompts.FinishedOutputParser(*, finished_value: str = 'FINISHED')[source]¶
Bases: BaseOutputParser[Tuple[str, bool]]
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param finished_value: str = 'FINISHED'¶
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
parse(text: str) → Tuple[str, bool][source]¶
Parse a single string model output into some structure.
Parameters
text – String output of language model.
Returns
Structured output.
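For illustration (a sketch; the exact whitespace handling is an assumption), the parser reports whether the sentinel appeared and returns the text with it removed:
parser = FinishedOutputParser()
text, finished = parser.parse("Paris is the capital of France. FINISHED")
# finished -> True; text -> the completion with "FINISHED" stripped out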
parse_result(result: List[Generation]) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of language model.
prompt – Input PromptValue.
Returns
Structured output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.