Dataset Viewer (auto-converted to Parquet)
Columns: id (string, 14-15 chars), text (string, 22-2.51k chars), source (string, 60-153 chars)
API Reference

langchain.agents: Agents
Interface for agents.

Classes:
- agents.agent.Agent: Class responsible for calling the language model and deciding the action.
- agents.agent.AgentExecutor: Consists of an agent using tools.
- agents.agent.AgentOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.agent.BaseMultiActionAgent: Base Agent class.
- agents.agent.BaseSingleActionAgent: Base Agent class.
- agents.agent.ExceptionTool: Create a new model by parsing and validating input data from keyword arguments.
- agents.agent.LLMSingleActionAgent: Create a new model by parsing and validating input data from keyword arguments.
- agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit: Toolkit for Azure Cognitive Services.
- agents.agent_toolkits.base.BaseToolkit: Class representing a collection of related tools.
- agents.agent_toolkits.file_management.toolkit.FileManagementToolkit: Toolkit for interacting with local files.
- agents.agent_toolkits.gmail.toolkit.GmailToolkit: Toolkit for interacting with Gmail.
- agents.agent_toolkits.jira.toolkit.JiraToolkit: Jira Toolkit.
- agents.agent_toolkits.json.toolkit.JsonToolkit: Toolkit for interacting with a JSON spec.
- agents.agent_toolkits.nla.tool.NLATool: Natural Language API Tool.
- agents.agent_toolkits.nla.toolkit.NLAToolkit: Natural Language API Toolkit Definition.
- agents.agent_toolkits.office365.toolkit.O365Toolkit: Toolkit for interacting with Office365.
- agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing: Create a new model by parsing and validating input data from keyword arguments.
- agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing: Create a new model by parsing and validating input data from keyword arguments.
- agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing: Create a new model by parsing and validating input data from keyword arguments.
Source: rtdocs\api.python.langchain.com\en\latest\api_reference.html
- agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing: Create a new model by parsing and validating input data from keyword arguments.
- agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit: Toolkit for interacting with an OpenAPI API.
- agents.agent_toolkits.openapi.toolkit.RequestsToolkit: Toolkit for making requests.
- agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit: Toolkit for web browser tools.
- agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit: Toolkit for interacting with a Power BI dataset.
- agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit: Toolkit for interacting with Spark SQL.
- agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit: Toolkit for interacting with SQL databases.
- agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo: Information about a vectorstore.
- agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit: Toolkit for routing between vector stores.
- agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit: Toolkit for interacting with a vector store.
- agents.agent_toolkits.zapier.toolkit.ZapierToolkit: Zapier Toolkit.
- agents.agent_types.AgentType(value[, names, ...]): Enumerator with the Agent types.
- agents.chat.base.ChatAgent: Create a new model by parsing and validating input data from keyword arguments.
- agents.chat.output_parser.ChatOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.conversational.base.ConversationalAgent: An agent designed to hold a conversation in addition to using tools.
- agents.conversational.output_parser.ConvoOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.conversational_chat.base.ConversationalChatAgent: An agent designed to hold a conversation in addition to using tools.
- agents.conversational_chat.output_parser.ConvoOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.mrkl.base.ChainConfig(action_name, ...): Configuration for a chain to use in the MRKL system.
- agents.mrkl.base.MRKLChain: Chain that implements the MRKL system.
- agents.mrkl.base.ZeroShotAgent: Agent for the MRKL chain.
- agents.mrkl.output_parser.MRKLOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.openai_functions_agent.base.OpenAIFunctionsAgent: An Agent driven by OpenAI's function-calling API.
- agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent: An Agent driven by OpenAI's function-calling API.
- agents.react.base.ReActChain: Chain that implements the ReAct paper.
- agents.react.base.ReActDocstoreAgent: Agent for the ReAct chain.
- agents.react.base.ReActTextWorldAgent: Agent for the ReAct TextWorld chain.
- agents.react.output_parser.ReActOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.schema.AgentScratchPadChatPromptTemplate: Create a new model by parsing and validating input data from keyword arguments.
- agents.self_ask_with_search.base.SelfAskWithSearchAgent: Agent for the self-ask-with-search paper.
- agents.self_ask_with_search.base.SelfAskWithSearchChain: Chain that does self-ask with search.
- agents.self_ask_with_search.output_parser.SelfAskOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.structured_chat.base.StructuredChatAgent: Create a new model by parsing and validating input data from keyword arguments.
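Most of the agents listed above (ZeroShotAgent, the ReAct agents, the conversational agents) are single-action agents driven by one loop: the agent proposes a tool and an input, the executor runs the tool, and the observation is appended to a scratchpad until the agent decides to finish. A minimal stdlib-only sketch of that loop, assuming a scripted planner in place of the LLM (all names here are illustrative, not LangChain's API):

```python
from typing import Callable, Dict, List, Tuple, Union

# Hypothetical stand-ins for AgentAction / AgentFinish.
Action = Tuple[str, str]   # (tool_name, tool_input)
Finish = str               # the final answer

def run_agent(
    plan: Callable[[List[Tuple[Action, str]]], Union[Action, Finish]],
    tools: Dict[str, Callable[[str], str]],
    max_iterations: int = 5,
) -> str:
    """Minimal executor loop: plan -> act -> observe, until the agent finishes."""
    scratchpad: List[Tuple[Action, str]] = []  # (action, observation) pairs
    for _ in range(max_iterations):
        step = plan(scratchpad)
        if isinstance(step, str):              # the agent decided to finish
            return step
        tool_name, tool_input = step
        # Unknown tool names get an error observation, like agents.tools.InvalidTool.
        tool = tools.get(tool_name, lambda s: f"invalid tool: {tool_name}")
        scratchpad.append((step, tool(tool_input)))
    return "Agent stopped after max iterations."

# A scripted "agent" standing in for the LLM-driven planner:
def scripted_plan(scratchpad):
    if not scratchpad:
        return ("calculator", "2+3")
    return f"The answer is {scratchpad[-1][1]}"

tools = {"calculator": lambda expr: str(eval(expr))}  # toy tool; eval is fine in a sketch
print(run_agent(scripted_plan, tools))  # The answer is 5
```

The real AgentExecutor adds callbacks, early-stopping policies, and output parsing on top of this skeleton.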
- agents.structured_chat.output_parser.StructuredChatOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries: Create a new model by parsing and validating input data from keyword arguments.
- agents.tools.InvalidTool: Tool that is run when an invalid tool name is encountered by the agent.

Functions:
- agents.agent_toolkits.csv.base.create_csv_agent(...): Create a CSV agent by loading into a dataframe and using the pandas agent.
- agents.agent_toolkits.json.base.create_json_agent(...): Construct a JSON agent from an LLM and tools.
- agents.agent_toolkits.openapi.base.create_openapi_agent(...): Construct an OpenAPI agent from an LLM and tools.
- agents.agent_toolkits.openapi.planner.create_openapi_agent(...): Instantiate an API planner and controller for a given spec.
- agents.agent_toolkits.openapi.spec.dereference_refs(...): Try to substitute $refs.
- agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec): Simplify/distill/minify an OpenAPI spec.
- agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df): Construct a pandas agent from an LLM and dataframe.
- agents.agent_toolkits.powerbi.base.create_pbi_agent(llm): Construct a Power BI agent from an LLM and tools.
- agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm): Construct a Power BI agent from a chat LLM and tools.
- agents.agent_toolkits.python.base.create_python_agent(...): Construct a Python agent from an LLM and tool.
- agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df): Construct a Spark agent from an LLM and dataframe.
- agents.agent_toolkits.spark_sql.base.create_spark_sql_agent(...): Construct a Spark SQL agent from an LLM and tools.
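dereference_refs above tries to substitute JSON-schema $refs inside an OpenAPI spec. The core idea can be sketched with plain dicts, assuming only local "#/"-style references and no cycles (the function name and scope here are illustrative):

```python
def deref(obj, spec):
    """Recursively replace {"$ref": "#/a/b"} nodes with the target they point to."""
    if isinstance(obj, dict):
        if "$ref" in obj:
            target = spec
            # "#/components/schemas/Pet" -> walk ["components", "schemas", "Pet"]
            for part in obj["$ref"].lstrip("#/").split("/"):
                target = target[part]
            return deref(target, spec)          # the target may itself contain $refs
        return {k: deref(v, spec) for k, v in obj.items()}
    if isinstance(obj, list):
        return [deref(v, spec) for v in obj]
    return obj

spec = {
    "components": {"schemas": {"Pet": {"type": "object"}}},
    "paths": {"/pets": {"schema": {"$ref": "#/components/schemas/Pet"}}},
}
print(deref(spec["paths"], spec))
# {'/pets': {'schema': {'type': 'object'}}}
```

The real implementation also has to cope with external and cyclic references, which is why its docstring only promises to "try".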
- agents.agent_toolkits.sql.base.create_sql_agent(...): Construct a SQL agent from an LLM and tools.
- agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...): Construct a vectorstore agent from an LLM and tools.
- agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...): Construct a vectorstore router agent from an LLM and tools.
- agents.initialize.initialize_agent(tools, llm): Load an agent executor given tools and an LLM.
- agents.load_tools.get_all_tool_names(): Get a list of all possible tool names.
- agents.load_tools.load_huggingface_tool(...): Loads a tool from the HuggingFace Hub.
- agents.load_tools.load_tools(tool_names[, ...]): Load tools based on their name.
- agents.loading.load_agent(path, **kwargs): Unified method for loading an agent from LangChainHub or the local filesystem.
- agents.loading.load_agent_from_config(config): Load an agent from a config dict.
- agents.utils.validate_tools_single_input(...): Validate tools for single input.

langchain.base_language: Base Language

Classes:
- base_language.BaseLanguageModel: Base class for all language models.

langchain.cache: Cache
Beta Feature: base interface for cache.

Classes:
- cache.BaseCache(): Base interface for cache.
- cache.FullLLMCache(**kwargs): SQLite table for full LLM cache (all generations).
- cache.GPTCache([init_func]): Cache that uses GPTCache as a backend.
- cache.InMemoryCache(): Cache that stores things in memory.
- cache.MomentoCache(cache_client, cache_name, *): Cache that uses Momento as a backend.
- cache.RedisCache(redis_): Cache that uses Redis as a backend.
- cache.RedisSemanticCache(redis_url, embedding): Cache that uses Redis as a vector-store backend.
- cache.SQLAlchemyCache(engine, cache_schema): Cache that uses SQLAlchemy as a backend.
- cache.SQLiteCache([database_path]): Cache that uses SQLite as a backend.

langchain.callbacks: Callbacks
Callback handlers that allow listening to events in LangChain.

Classes:
- callbacks.aim_callback.AimCallbackHandler([...]): Callback Handler that logs to Aim.
- callbacks.argilla_callback.ArgillaCallbackHandler(...): Callback Handler that logs into Argilla.
- callbacks.arize_callback.ArizeCallbackHandler([...]): Callback Handler that logs to Arize.
- callbacks.arthur_callback.ArthurCallbackHandler(...): Callback Handler that logs to the Arthur platform.
- callbacks.base.AsyncCallbackHandler(): Async callback handler that can be used to handle callbacks from LangChain.
- callbacks.base.BaseCallbackHandler(): Base callback handler that can be used to handle callbacks from LangChain.
- callbacks.base.BaseCallbackManager(handlers): Base callback manager that can be used to handle callbacks from LangChain.
- callbacks.clearml_callback.ClearMLCallbackHandler([...]): Callback Handler that logs to ClearML.
- callbacks.comet_ml_callback.CometCallbackHandler([...]): Callback Handler that logs to Comet.
- callbacks.file.FileCallbackHandler(filename): Callback Handler that writes to a file.
- callbacks.flyte_callback.FlyteCallbackHandler(): Callback handler designed specifically for use within a Flyte task.
- callbacks.human.HumanApprovalCallbackHandler(...): Callback for manually validating values.
- callbacks.human.HumanRejectedException: Exception to raise when a person manually reviews and rejects a value.
- callbacks.infino_callback.InfinoCallbackHandler([...]): Callback Handler that logs to Infino.
- callbacks.manager.AsyncCallbackManager(handlers): Async callback manager that can be used to handle callbacks from LangChain.
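Stepping back to the cache backends listed above (InMemoryCache through SQLiteCache): they all implement the same small interface, keyed on the prompt plus a string identifying the LLM and its parameters, so a cached answer is only reused when both match. A dict-backed sketch of that lookup/update shape (class name and key format are illustrative):

```python
from typing import Any, Dict, Optional, Tuple

class InMemoryCacheSketch:
    """Dict-backed cache keyed on (prompt, llm_string), mirroring the
    lookup/update interface the cache backends share."""

    def __init__(self) -> None:
        self._cache: Dict[Tuple[str, str], Any] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        """Return the cached generations, or None on a miss."""
        return self._cache.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:
        """Store generations under this exact prompt + LLM configuration."""
        self._cache[(prompt, llm_string)] = return_val

cache = InMemoryCacheSketch()
cache.update("Tell me a joke", "openai|temperature=0.7", ["Why did the ..."])
print(cache.lookup("Tell me a joke", "openai|temperature=0.7"))  # hit
print(cache.lookup("Tell me a joke", "openai|temperature=0.0"))  # None: params differ
```

The Redis, SQLite, and SQLAlchemy backends swap the dict for external storage; RedisSemanticCache additionally matches prompts by embedding similarity rather than exact equality.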
- callbacks.manager.AsyncCallbackManagerForChainRun(*, ...): Async callback manager for a chain run.
- callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...): Async callback manager for an LLM run.
- callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...): Async callback manager for a retriever run.
- callbacks.manager.AsyncCallbackManagerForToolRun(*, ...): Async callback manager for a tool run.
- callbacks.manager.AsyncParentRunManager(*, ...): Async Parent Run Manager.
- callbacks.manager.AsyncRunManager(*, run_id, ...): Async Run Manager.
- callbacks.manager.BaseRunManager(*, run_id, ...): Base class for run managers (a bound callback manager).
- callbacks.manager.CallbackManager(handlers): Callback manager that can be used to handle callbacks from LangChain.
- callbacks.manager.CallbackManagerForChainRun(*, ...): Callback manager for a chain run.
- callbacks.manager.CallbackManagerForLLMRun(*, ...): Callback manager for an LLM run.
- callbacks.manager.CallbackManagerForRetrieverRun(*, ...): Callback manager for a retriever run.
- callbacks.manager.CallbackManagerForToolRun(*, ...): Callback manager for a tool run.
- callbacks.manager.ParentRunManager(*, ...[, ...]): Sync Parent Run Manager.
- callbacks.manager.RunManager(*, run_id, ...): Sync Run Manager.
- callbacks.mlflow_callback.MlflowCallbackHandler([...]): Callback Handler that logs metrics and artifacts to an MLflow server.
- callbacks.openai_info.OpenAICallbackHandler(): Callback Handler that tracks OpenAI info.
- callbacks.promptlayer_callback.PromptLayerCallbackHandler([...]): Callback handler for PromptLayer.
- callbacks.stdout.StdOutCallbackHandler([color]): Callback Handler that prints to stdout.
- callbacks.streaming_aiter.AsyncIteratorCallbackHandler(): Callback handler that returns an async iterator.
- callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*): Callback handler that returns an async iterator.
- callbacks.streaming_stdout.StreamingStdOutCallbackHandler(): Callback handler for streaming.
- callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*): Callback handler for streaming in agents.
- callbacks.streamlit.mutable_expander.ChildRecord(...): The child record as a NamedTuple.
- callbacks.streamlit.mutable_expander.ChildType(value): The enumerator of the child type.
- callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value): Enumerator of the LLMThought state.
- callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(...): A callback handler that writes to a Streamlit app.
- callbacks.streamlit.streamlit_callback_handler.ToolRecord(...): The tool record as a NamedTuple.
- callbacks.tracers.base.BaseTracer(**kwargs): Base interface for tracers.
- callbacks.tracers.base.TracerException: Base class for exceptions in the tracers module.
- callbacks.tracers.evaluation.EvaluatorCallbackHandler(...): A tracer that runs a run evaluator whenever a run is persisted.
- callbacks.tracers.langchain.LangChainTracer([...]): An implementation of the SharedTracer that POSTs to the LangChain endpoint.
- callbacks.tracers.langchain_v1.LangChainTracerV1(...): An implementation of the SharedTracer that POSTs to the LangChain endpoint.
- callbacks.tracers.run_collector.RunCollectorCallbackHandler([...]): A tracer that collects all nested runs in a list.
- callbacks.tracers.schemas.BaseRun: Base class for Run.
- callbacks.tracers.schemas.ChainRun: Class for ChainRun.
- callbacks.tracers.schemas.LLMRun: Class for LLMRun.
- callbacks.tracers.schemas.Run: Run schema for the V2 API in the Tracer.
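The many handlers above all plug into the same mechanism: a callback manager fans each event out to every registered handler, and a handler overrides only the hooks it cares about. A stdlib-only sketch of that dispatch, assuming just two hooks in the on_llm_start/on_llm_end naming convention (the classes here are illustrative, not LangChain's):

```python
from typing import List

class BaseHandlerSketch:
    """Handlers override only the hooks they care about; the rest are no-ops."""
    def on_llm_start(self, prompt: str) -> None: ...
    def on_llm_end(self, response: str) -> None: ...

class RecordingHandlerSketch(BaseHandlerSketch):
    """Records events in a list; a StdOut- or File-style handler would
    print or write them instead."""
    def __init__(self) -> None:
        self.lines: List[str] = []
    def on_llm_start(self, prompt: str) -> None:
        self.lines.append(f"start: {prompt}")
    def on_llm_end(self, response: str) -> None:
        self.lines.append(f"end: {response}")

class CallbackManagerSketch:
    """Fans each event out to every registered handler."""
    def __init__(self, handlers: List[BaseHandlerSketch]) -> None:
        self.handlers = list(handlers)
    def on_llm_start(self, prompt: str) -> None:
        for h in self.handlers:
            h.on_llm_start(prompt)
    def on_llm_end(self, response: str) -> None:
        for h in self.handlers:
            h.on_llm_end(response)

handler = RecordingHandlerSketch()
manager = CallbackManagerSketch([handler])
manager.on_llm_start("Hi")
manager.on_llm_end("Hello!")
print(handler.lines)  # ['start: Hi', 'end: Hello!']
```

The real API adds many more hooks (chain, tool, and retriever events), run IDs for nesting, and async variants, but the fan-out shape is the same.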
- callbacks.tracers.schemas.ToolRun: Class for ToolRun.
- callbacks.tracers.schemas.TracerSession: TracerSession schema for the V2 API.
- callbacks.tracers.schemas.TracerSessionBase: A creation class for TracerSession.
- callbacks.tracers.schemas.TracerSessionV1: TracerSessionV1 schema.
- callbacks.tracers.schemas.TracerSessionV1Base: Base class for TracerSessionV1.
- callbacks.tracers.schemas.TracerSessionV1Create: Create class for TracerSessionV1.
- callbacks.tracers.stdout.ConsoleCallbackHandler(...): Tracer that prints to the console.
- callbacks.tracers.wandb.WandbRunArgs: Arguments for the WandbTracer.
- callbacks.tracers.wandb.WandbTracer([run_args]): Callback Handler that logs to Weights and Biases.
- callbacks.wandb_callback.WandbCallbackHandler([...]): Callback Handler that logs to Weights and Biases.
- callbacks.whylabs_callback.WhyLabsCallbackHandler(logger): Callback Handler for logging to WhyLabs.

Functions:
- callbacks.aim_callback.import_aim(): Import the aim python package and raise an error if it is not installed.
- callbacks.clearml_callback.import_clearml(): Import the clearml python package and raise an error if it is not installed.
- callbacks.comet_ml_callback.import_comet_ml(): Import comet_ml and raise an error if it is not installed.
- callbacks.flyte_callback.analyze_text(text): Analyze text using textstat and spacy.
- callbacks.flyte_callback.import_flytekit(): Import flytekit and flytekitplugins-deck-standard.
- callbacks.infino_callback.import_infino(): Import the infino client.
- callbacks.manager.env_var_is_set(env_var): Check if an environment variable is set.
- callbacks.manager.get_openai_callback(): Get the OpenAI callback handler in a context manager.
- callbacks.manager.trace_as_chain_group(...): Get a callback manager for a chain group in a context manager.
- callbacks.manager.tracing_enabled([session_name]): Get the deprecated LangChainTracer in a context manager.
- callbacks.manager.tracing_v2_enabled([...]): Instruct LangChain to log all runs in context to LangSmith.
- callbacks.manager.wandb_tracing_enabled([...]): Get the WandbTracer in a context manager.
- callbacks.mlflow_callback.analyze_text(text): Analyze text using textstat and spacy.
- callbacks.mlflow_callback.construct_html_from_prompt_and_generation(...): Construct an HTML element from a prompt and a generation.
- callbacks.mlflow_callback.import_mlflow(): Import the mlflow python package and raise an error if it is not installed.
- callbacks.openai_info.get_openai_token_cost_for_model(...): Get the cost in USD for a given model and number of tokens.
- callbacks.openai_info.standardize_model_name(...): Standardize the model name to a format that can be used in the OpenAI API. Parameters: model_name (the model name to standardize), is_completion (whether the model is used for completion; defaults to False).
- callbacks.streamlit.__init__.StreamlitCallbackHandler(...): Construct a new StreamlitCallbackHandler.
- callbacks.tracers.langchain.log_error_once(...): Log an error once.
- callbacks.tracers.langchain.wait_for_all_tracers(): Wait for all tracers to finish.
- callbacks.tracers.langchain_v1.get_headers(): Get the headers for the LangChain API.
- callbacks.tracers.stdout.elapsed(run): Get the elapsed time of a run.
- callbacks.tracers.stdout.try_json_stringify(...): Try to stringify an object to JSON.
- callbacks.utils.flatten_dict(nested_dict[, ...]): Flattens a nested dictionary into a flat dictionary.
- callbacks.utils.hash_string(s): Hash a string using sha1.
- callbacks.utils.import_pandas(): Import the pandas python package and raise an error if it is not installed.
- callbacks.utils.import_spacy(): Import the spacy python package and raise an error if it is not installed.
- callbacks.utils.import_textstat(): Import the textstat python package and raise an error if it is not installed.
- callbacks.utils.load_json(json_path): Load a JSON file to a string.
- callbacks.wandb_callback.analyze_text(text): Analyze text using textstat and spacy.
- callbacks.wandb_callback.construct_html_from_prompt_and_generation(...): Construct an HTML element from a prompt and a generation.
- callbacks.wandb_callback.import_wandb(): Import the wandb python package and raise an error if it is not installed.
- callbacks.wandb_callback.load_json_to_dict(...): Load a JSON file to a dictionary.
- callbacks.whylabs_callback.import_langkit([...]): Import the langkit python package and raise an error if it is not installed.

langchain.chains: Chains
Chains are easily reusable components which can be linked together.

Classes:
- chains.api.base.APIChain: Chain that makes API calls and summarizes the responses to answer a question.
- chains.api.openapi.chain.OpenAPIEndpointChain: Chain that interacts with an OpenAPI endpoint using natural language.
- chains.api.openapi.requests_chain.APIRequesterChain: Get the request parser.
- chains.api.openapi.requests_chain.APIRequesterOutputParser: Parse the request and error tags.
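Two of the callback utilities listed above are concrete enough to sketch: flatten_dict joins nested keys with a separator, and hash_string is documented as a sha1 digest. A stdlib-only sketch consistent with those descriptions (the underscore separator is an assumption):

```python
import hashlib

def flatten_dict(nested: dict, parent_key: str = "", sep: str = "_") -> dict:
    """Flatten {'a': {'b': 1}} into {'a_b': 1}, recursing into nested dicts."""
    flat = {}
    for key, value in nested.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_dict(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

def hash_string(s: str) -> str:
    """Hash a string using sha1, as the docstring above describes."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

print(flatten_dict({"a": {"b": 1, "c": {"d": 2}}, "e": 3}))
# {'a_b': 1, 'a_c_d': 2, 'e': 3}
print(len(hash_string("langchain")))  # 40 (sha1 hex digest length)
```

Flattening like this is what lets the logging handlers (wandb, mlflow, ...) report nested run metadata as flat metric keys.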
- chains.api.openapi.response_chain.APIResponderChain: Get the response parser.
- chains.api.openapi.response_chain.APIResponderOutputParser: Parse the response and error tags.
- chains.base.Chain: Abstract base class for creating structured sequences of calls to components.
- chains.combine_documents.base.AnalyzeDocumentChain: Chain that splits a document, then analyzes it in pieces.
- chains.combine_documents.base.BaseCombineDocumentsChain: Base interface for chains combining documents.
- chains.combine_documents.map_reduce.MapReduceDocumentsChain: Combining documents by mapping a chain over them, then combining results.
- chains.combine_documents.map_rerank.MapRerankDocumentsChain: Combining documents by mapping a chain over them, then reranking results.
- chains.combine_documents.reduce.AsyncCombineDocsProtocol(...): Interface for the combine_docs method.
- chains.combine_documents.reduce.CombineDocsProtocol(...): Interface for the combine_docs method.
- chains.combine_documents.reduce.ReduceDocumentsChain: Combining documents by recursively reducing them.
- chains.combine_documents.refine.RefineDocumentsChain: Combine documents by doing a first pass and then refining on more documents.
- chains.combine_documents.stuff.StuffDocumentsChain: Chain that combines documents by stuffing them into context.
- chains.constitutional_ai.base.ConstitutionalChain: Chain for applying constitutional principles.
- chains.constitutional_ai.models.ConstitutionalPrinciple: Class for a constitutional principle.
- chains.conversation.base.ConversationChain: Chain to have a conversation and load context from memory.
- chains.conversational_retrieval.base.BaseConversationalRetrievalChain: Chain for chatting with an index.
- chains.conversational_retrieval.base.ChatVectorDBChain: Chain for chatting with a vector database.
- chains.conversational_retrieval.base.ConversationalRetrievalChain: Chain for having a conversation based on retrieved documents.
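The combine-documents chains above differ mainly in how they fit documents into the model's context window: "stuff" concatenates everything into one prompt, map-reduce runs a chain per piece and then combines, refine iterates. The stuffing step is simple enough to sketch directly (the template wording is illustrative, not the library's prompt):

```python
from typing import List

def stuff_documents(docs: List[str], question: str) -> str:
    """StuffDocumentsChain-style combination: join all documents into one
    context block and format a single prompt for the LLM."""
    context = "\n\n".join(docs)
    return (
        "Use the context below to answer.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

prompt = stuff_documents(["Doc one.", "Doc two."], "What do the docs say?")
print(prompt)
```

Stuffing fails once the joined documents exceed the context window, which is exactly the situation MapReduceDocumentsChain and RefineDocumentsChain exist to handle.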
- chains.flare.base.FlareChain: Create a new model by parsing and validating input data from keyword arguments.
- chains.flare.base.QuestionGeneratorChain: Create a new model by parsing and validating input data from keyword arguments.
- chains.flare.prompts.FinishedOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- chains.graph_qa.base.GraphQAChain: Chain for question-answering against a graph.
- chains.graph_qa.cypher.GraphCypherQAChain: Chain for question-answering against a graph by generating Cypher statements.
- chains.graph_qa.hugegraph.HugeGraphQAChain: Chain for question-answering against a graph by generating Gremlin statements.
- chains.graph_qa.kuzu.KuzuQAChain: Chain for question-answering against a graph by generating Cypher statements for Kùzu.
- chains.graph_qa.nebulagraph.NebulaGraphQAChain: Chain for question-answering against a graph by generating nGQL statements.
- chains.graph_qa.sparql.GraphSparqlQAChain: Chain for question-answering against an RDF or OWL graph by generating SPARQL statements.
- chains.hyde.base.HypotheticalDocumentEmbedder: Generate a hypothetical document for a query, and then embed it.
- chains.llm.LLMChain: Chain to run queries against LLMs.
- chains.llm_bash.base.LLMBashChain: Chain that interprets a prompt and executes bash code to perform bash operations.
- chains.llm_bash.prompt.BashOutputParser: Parser for bash output.
- chains.llm_checker.base.LLMCheckerChain: Chain for question-answering with self-verification.
- chains.llm_math.base.LLMMathChain: Chain that interprets a prompt and executes Python code to do math.
- chains.llm_requests.LLMRequestsChain: Chain that hits a URL and then uses an LLM to parse results.
- chains.llm_summarization_checker.base.LLMSummarizationCheckerChain: Chain for summarization with self-verification.
- chains.mapreduce.MapReduceChain: Map-reduce chain.
- chains.moderation.OpenAIModerationChain: Pass input through a moderation endpoint.
- chains.natbot.base.NatBotChain: Implement an LLM-driven browser.
- chains.natbot.crawler.ElementInViewPort: A typed dictionary containing information about elements in the viewport.
- chains.openai_functions.citation_fuzzy_match.FactWithEvidence: Class representing a single statement.
- chains.openai_functions.citation_fuzzy_match.QuestionAnswer: A question and its answer as a list of facts, each of which should have a source.
- chains.openai_functions.openapi.SimpleRequestChain: Create a new model by parsing and validating input data from keyword arguments.
- chains.openai_functions.qa_with_structure.AnswerWithSources: An answer to the question being asked, with sources.
- chains.pal.base.PALChain: Implements Program-Aided Language Models.
- chains.prompt_selector.BasePromptSelector: Create a new model by parsing and validating input data from keyword arguments.
- chains.prompt_selector.ConditionalPromptSelector: Prompt collection that goes through conditionals.
- chains.qa_generation.base.QAGenerationChain: Create a new model by parsing and validating input data from keyword arguments.
- chains.qa_with_sources.base.BaseQAWithSourcesChain: Question answering with sources over documents.
- chains.qa_with_sources.base.QAWithSourcesChain: Question answering with sources over documents.
- chains.qa_with_sources.loading.LoadingCallable(...): Interface for loading the combine documents chain.
- chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain: Question-answering with sources over an index.
- chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain: Question-answering with sources over a vector database.
- chains.query_constructor.base.StructuredQueryOutputParser: Create a new model by parsing and validating input data from keyword arguments.
- chains.query_constructor.ir.Comparator(value): Enumerator of the comparison operators.
- chains.query_constructor.ir.Comparison: A comparison to a value.
- chains.query_constructor.ir.Expr: Create a new model by parsing and validating input data from keyword arguments.
- chains.query_constructor.ir.FilterDirective: A filtering expression.
- chains.query_constructor.ir.Operation: A logical operation over other directives.
- chains.query_constructor.ir.Operator(value): Enumerator of the operations.
- chains.query_constructor.ir.StructuredQuery: Create a new model by parsing and validating input data from keyword arguments.
- chains.query_constructor.ir.Visitor(): Defines the interface for IR translation using the visitor pattern.
- chains.query_constructor.parser.QueryTransformer
- chains.query_constructor.schema.AttributeInfo: Information about a data source attribute.
- chains.question_answering.__init__.LoadingCallable(...): Interface for loading the combine documents chain.
- chains.retrieval_qa.base.BaseRetrievalQA: Create a new model by parsing and validating input data from keyword arguments.
- chains.retrieval_qa.base.RetrievalQA: Chain for question-answering against an index.
- chains.retrieval_qa.base.VectorDBQA: Chain for question-answering against a vector database.
- chains.router.base.MultiRouteChain: Use a single chain to route an input to one of multiple candidate chains.
- chains.router.base.Route(destination, ...): Create a new instance of Route(destination, next_inputs).
- chains.router.base.RouterChain: Chain that outputs the name of a destination chain and the inputs to it.
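The query-constructor IR above (Comparison, Operation, Operator, Visitor) is a small expression tree that backend-specific translators walk to emit vector-store filters. A sketch of that visitor pattern rendering the IR as a filter string (the dataclasses are inspired by, not identical to, the real pydantic models):

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Comparison:
    """A comparison to a value, e.g. year >= 2000."""
    comparator: str
    attribute: str
    value: object

@dataclass
class Operation:
    """A logical operation (AND/OR) over other directives."""
    operator: str
    arguments: List["Directive"]

Directive = Union[Comparison, Operation]

class StringVisitor:
    """Visitor that translates the IR into a filter string; a real translator
    would emit the target store's native filter syntax instead."""
    def visit(self, node: Directive) -> str:
        if isinstance(node, Comparison):
            return f"{node.attribute} {node.comparator} {node.value!r}"
        parts = [self.visit(arg) for arg in node.arguments]
        return "(" + f" {node.operator} ".join(parts) + ")"

query = Operation("AND", [Comparison(">=", "year", 2000),
                          Comparison("==", "genre", "scifi")])
print(StringVisitor().visit(query))
# (year >= 2000 AND genre == 'scifi')
```

Keeping the IR store-agnostic is the point: the LLM only has to produce one structured query shape, and a per-backend visitor handles the rest.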
- chains.router.embedding_router.EmbeddingRouterChain: Class that uses embeddings to route between options.
- chains.router.llm_router.LLMRouterChain: A router chain that uses an LLM chain to perform routing.
- chains.router.llm_router.RouterOutputParser: Parser for the output of the router chain in the multi-prompt chain.
- chains.router.multi_prompt.MultiPromptChain: A multi-route chain that uses an LLM router chain to choose amongst prompts.
- chains.router.multi_retrieval_qa.MultiRetrievalQAChain: A multi-route chain that uses an LLM router chain to choose amongst retrieval QA chains.
- chains.sequential.SequentialChain: Chain where the outputs of one chain feed directly into the next.
- chains.sequential.SimpleSequentialChain: Simple chain where the outputs of one step feed directly into the next.
- chains.sql_database.base.SQLDatabaseChain: Chain for interacting with a SQL database.
- chains.sql_database.base.SQLDatabaseSequentialChain: Chain for querying a SQL database that is a sequential chain.
- chains.summarize.__init__.LoadingCallable(...): Interface for loading the combine documents chain.
- chains.transform.TransformChain: Chain that transforms the chain output.

Functions:
- chains.graph_qa.cypher.extract_cypher(text): Extract Cypher code from a text.
- chains.loading.load_chain(path, **kwargs): Unified method for loading a chain from LangChainHub or the local filesystem.
- chains.loading.load_chain_from_config(...): Load a chain from a config dict.
- chains.openai_functions.base.convert_python_function_to_openai_function(...): Convert a Python function to a dict compatible with the OpenAI function-calling API.
- chains.openai_functions.base.convert_to_openai_function(...): Convert a raw function/class to an OpenAI function.
- chains.openai_functions.base.create_openai_fn_chain(...): Create an LLM chain that uses OpenAI functions.
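SimpleSequentialChain above pipes the single output of each step into the next; that composition can be sketched without LangChain as an ordinary fold over callables (the step functions stand in for LLM-backed chains):

```python
from typing import Callable, List

def simple_sequential(steps: List[Callable[[str], str]], text: str) -> str:
    """Feed each step's output into the next, SimpleSequentialChain-style:
    one string in, one string out, per step."""
    for step in steps:
        text = step(text)
    return text

# Stand-ins for LLM-backed chains:
def outline(topic: str) -> str:
    return f"Outline for {topic}"

def expand(outline_text: str) -> str:
    return f"Essay based on: {outline_text}"

print(simple_sequential([outline, expand], "whales"))
# Essay based on: Outline for whales
```

SequentialChain generalizes this by letting each step consume and produce multiple named variables instead of one string.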
- chains.openai_functions.base.create_structured_output_chain(...): Create an LLMChain that uses an OpenAI function to get a structured output.
- chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm): Create a citation fuzzy match chain.
- chains.openai_functions.extraction.create_extraction_chain(...): Creates a chain that extracts information from a passage.
- chains.openai_functions.extraction.create_extraction_chain_pydantic(...): Creates a chain that extracts information from a passage using a pydantic schema.
- chains.openai_functions.openapi.get_openapi_chain(spec): Create a chain for querying an API from an OpenAPI spec.
- chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec): Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
- chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(...): Create a question answering chain that returns an answer with sources.
- chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...): Create a question answering chain that returns an answer with sources.
- chains.openai_functions.tagging.create_tagging_chain(...): Creates a chain that extracts information from a passage.
- chains.openai_functions.tagging.create_tagging_chain_pydantic(...): Creates a chain that extracts information from a passage.
- chains.openai_functions.utils.get_llm_kwargs(...): Returns the kwargs for the LLMChain constructor.
- chains.prompt_selector.is_chat_model(llm): Check if the language model is a chat model.
- chains.prompt_selector.is_llm(llm): Check if the language model is an LLM.
- chains.qa_with_sources.loading.load_qa_with_sources_chain(llm): Load a question answering with sources chain.
- chains.query_constructor.base.load_query_constructor_chain(...): Load a query constructor chain.
chains.query_constructor.parser.get_parser([...])
Returns a parser for the query language.
chains.question_answering.__init__.load_qa_chain(llm)
Load a question answering chain.
chains.summarize.__init__.load_summarize_chain(llm)
Load a summarizing chain.
langchain.chat_models: Chat Models¶
Classes¶
chat_models.anthropic.ChatAnthropic
Wrapper around Anthropic's large language model.
chat_models.azure_openai.AzureChatOpenAI
Wrapper around the Azure OpenAI Chat Completion API.
chat_models.base.BaseChatModel
Create a new model by parsing and validating input data from keyword arguments.
chat_models.base.SimpleChatModel
Create a new model by parsing and validating input data from keyword arguments.
chat_models.fake.FakeListChatModel
Fake ChatModel for testing purposes.
chat_models.google_palm.ChatGooglePalm
Wrapper around Google's PaLM Chat API.
chat_models.google_palm.ChatGooglePalmError
Error raised when there is an issue with the Google PaLM API.
chat_models.human.HumanInputChatModel
ChatModel wrapper which returns user input as the response.
chat_models.openai.ChatOpenAI
Wrapper around OpenAI Chat large language models.
chat_models.promptlayer_openai.PromptLayerChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
chat_models.vertexai.ChatVertexAI
Wrapper around Vertex AI large language models.
Functions¶
chat_models.google_palm.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
langchain.client: Client¶
LangChain + Client.
Classes¶
client.runner_utils.InputFormatError
Raised when the input format is invalid.
Functions¶
client.runner_utils.run_llm(llm, inputs, ...)
Run the language model on the example.
client.runner_utils.run_llm_or_chain(...[, ...])
Run the Chain or language model synchronously.
client.runner_utils.run_on_dataset(...[, ...])
Run the Chain or language model on a dataset and store traces to the specified project name.
client.runner_utils.run_on_examples(...[, ...])
Run the Chain or language model on examples and store traces to the specified project name.
langchain.docstore: Docstore¶
Wrappers on top of docstores.
Classes¶
docstore.arbitrary_fn.DocstoreFn(lookup_fn)
Langchain Docstore via an arbitrary lookup function.
docstore.base.AddableMixin()
Mixin class that supports adding texts.
docstore.base.Docstore()
Interface for accessing a place that stores documents.
docstore.in_memory.InMemoryDocstore([_dict])
Simple in-memory docstore in the form of a dict.
docstore.wikipedia.Wikipedia()
Wrapper around the Wikipedia API.
langchain.document_loaders: Document Loaders¶
All different types of document loaders.
Classes¶
document_loaders.acreom.AcreomLoader(path[, ...])
Loader that loads an acreom vault from a directory.
document_loaders.airbyte_json.AirbyteJSONLoader(...)
Loader that loads local Airbyte JSON files.
document_loaders.airtable.AirtableLoader(...)
Loader for Airtable tables.
document_loaders.apify_dataset.ApifyDatasetLoader
Loads Documents from Apify datasets.
document_loaders.arxiv.ArxivLoader(query[, ...])
Loads a query result from arxiv.org into a list of Documents.
document_loaders.azlyrics.AZLyricsLoader(...)
Loader that loads AZLyrics webpages.
document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(...)
Loads Documents from Azure Blob Storage.
document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(...)
Loads Documents from Azure Blob Storage.
document_loaders.base.BaseBlobParser()
Abstract interface for blob parsers.
document_loaders.base.BaseLoader()
Interface for loading Documents.
document_loaders.bibtex.BibtexLoader(...[, ...])
Loads a bibtex file into a list of Documents.
document_loaders.bigquery.BigQueryLoader(query)
Loads a query result from BigQuery into a list of documents.
document_loaders.bilibili.BiliBiliLoader(...)
Loader that loads bilibili transcripts.
document_loaders.blackboard.BlackboardLoader(...)
Loads all documents from a Blackboard course.
document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path, *)
Blob loader for the local file system.
document_loaders.blob_loaders.schema.Blob
A blob is used to represent raw data by either reference or value.
document_loaders.blob_loaders.schema.BlobLoader()
Abstract interface for blob loader implementations.
document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(...)
Load YouTube URLs as audio file(s).
document_loaders.blockchain.BlockchainDocumentLoader(...)
Loads elements from a blockchain smart contract into Langchain documents.
document_loaders.blockchain.BlockchainType(value)
Enumerator of the supported blockchains.
document_loaders.brave_search.BraveSearchLoader(...)
Loads a query result from the Brave Search engine into a list of Documents.
document_loaders.chatgpt.ChatGPTLoader(log_file)
Load conversations from exported ChatGPT data.
document_loaders.college_confidential.CollegeConfidentialLoader(...)
Loader that loads College Confidential webpages.
document_loaders.confluence.ConfluenceLoader(url)
Load Confluence pages.
document_loaders.confluence.ContentFormat(value)
Enumerator of the content formats of a Confluence page.
document_loaders.conllu.CoNLLULoader(file_path)
Load CoNLL-U files.
document_loaders.csv_loader.CSVLoader(file_path)
Loads a CSV file into a list of documents.
document_loaders.csv_loader.UnstructuredCSVLoader(...)
Loader that uses unstructured to load CSV files.
document_loaders.cube_semantic.CubeSemanticLoader(...)
Load Cube semantic layer metadata.
document_loaders.dataframe.DataFrameLoader(...)
Load a Pandas DataFrame.
document_loaders.diffbot.DiffbotLoader(...)
Loads Diffbot file JSON.
document_loaders.directory.DirectoryLoader(...)
Load documents from a directory.
document_loaders.discord.DiscordChatLoader(...)
Load Discord chat logs.
document_loaders.docugami.DocugamiLoader
Loads processed docs from Docugami.
document_loaders.duckdb_loader.DuckDBLoader(query)
Loads a query result from DuckDB into a list of documents.
document_loaders.email.OutlookMessageLoader(...)
Loads Outlook Message files using extract_msg.
document_loaders.email.UnstructuredEmailLoader(...)
Loader that uses unstructured to load email files.
document_loaders.embaas.BaseEmbaasLoader
Base class for embedding a model into an Embaas document extraction API.
document_loaders.embaas.EmbaasBlobLoader
Embaas's document byte loader.
document_loaders.embaas.EmbaasDocumentExtractionParameters
Parameters for the Embaas document extraction API.
document_loaders.embaas.EmbaasDocumentExtractionPayload
Payload for the Embaas document extraction API.
document_loaders.embaas.EmbaasLoader
Embaas's document loader.
document_loaders.epub.UnstructuredEPubLoader(...)
Loader that uses unstructured to load EPub files.
document_loaders.evernote.EverNoteLoader(...)
EverNote Loader.
document_loaders.excel.UnstructuredExcelLoader(...)
Loader that uses unstructured to load Microsoft Excel files.
document_loaders.facebook_chat.FacebookChatLoader(path)
Loads a Facebook messages JSON directory dump.
document_loaders.fauna.FaunaLoader(query, ...)
FaunaDB Loader.
document_loaders.figma.FigmaFileLoader(...)
Loads Figma file JSON.
document_loaders.gcs_directory.GCSDirectoryLoader(...)
Loads Documents from GCS.
document_loaders.gcs_file.GCSFileLoader(...)
Load Documents from a GCS file.
document_loaders.generic.GenericLoader(...)
A generic document loader.
document_loaders.git.GitLoader(repo_path[, ...])
Loads files from a Git repository into a list of documents.
document_loaders.gitbook.GitbookLoader(web_page)
Load GitBook data.
document_loaders.github.BaseGitHubLoader
Load issues of a GitHub repository.
document_loaders.github.GitHubIssuesLoader
Load issues of a GitHub repository.
document_loaders.googledrive.GoogleDriveLoader
Loads Google Docs from Google Drive.
document_loaders.gutenberg.GutenbergLoader(...)
Loader that uses urllib to load .txt web files.
document_loaders.helpers.FileEncoding(...)
A file encoding as a NamedTuple.
document_loaders.hn.HNLoader(web_path[, ...])
Load Hacker News data from either main page results or the comments page.
document_loaders.html.UnstructuredHTMLLoader(...)
Loader that uses unstructured to load HTML files.
document_loaders.html_bs.BSHTMLLoader(file_path)
Loader that uses Beautiful Soup to parse HTML files.
document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path)
Load Documents from the Hugging Face Hub.
document_loaders.ifixit.IFixitLoader(web_path)
Load iFixit repair guides, device wikis and answers.
document_loaders.image.UnstructuredImageLoader(...)
Loader that uses unstructured to load image files, such as PNGs and JPGs.
document_loaders.image_captions.ImageCaptionLoader(...)
Loads the captions of an image.
document_loaders.imsdb.IMSDbLoader(web_path)
Loads IMSDb webpages.
document_loaders.iugu.IuguLoader(resource[, ...])
Loader that fetches data from IUGU.
document_loaders.joplin.JoplinLoader([...])
Loader that fetches notes from Joplin.
document_loaders.json_loader.JSONLoader(...)
Loads a JSON file using a jq schema.
document_loaders.larksuite.LarkSuiteDocLoader(...)
Loads a LarkSuite (FeiShu) document.
document_loaders.markdown.UnstructuredMarkdownLoader(...)
Loader that uses unstructured to load Markdown files.
document_loaders.mastodon.MastodonTootsLoader(...)
Mastodon toots loader.
document_loaders.max_compute.MaxComputeLoader(...)
Loads a query result from an Alibaba Cloud MaxCompute table into documents.
document_loaders.mediawikidump.MWDumpLoader(...)
Load a MediaWiki dump from an XML file.
document_loaders.merge.MergedDataLoader(loaders)
Merge documents from a list of loaders.
document_loaders.mhtml.MHTMLLoader(file_path)
Loader that uses Beautiful Soup to parse HTML files.
document_loaders.modern_treasury.ModernTreasuryLoader(...)
Loader that fetches data from Modern Treasury.
document_loaders.notebook.NotebookLoader(path)
Loader that loads .ipynb notebook files.
document_loaders.notion.NotionDirectoryLoader(path)
Loader that loads a Notion directory dump.
document_loaders.notiondb.NotionDBLoader(...)
Notion DB Loader.
document_loaders.obsidian.ObsidianLoader(path)
Loader that loads Obsidian files from disk.
document_loaders.odt.UnstructuredODTLoader(...)
Loader that uses unstructured to load OpenOffice ODT files.
document_loaders.onedrive.OneDriveLoader
Create a new model by parsing and validating input data from keyword arguments.
document_loaders.onedrive_file.OneDriveFileLoader
Create a new model by parsing and validating input data from keyword arguments.
document_loaders.open_city_data.OpenCityDataLoader(...)
Loader that loads Open City data.
document_loaders.org_mode.UnstructuredOrgModeLoader(...)
Loader that uses unstructured to load Org-Mode files.
document_loaders.parsers.audio.OpenAIWhisperParser()
Transcribe and parse audio files.
document_loaders.parsers.generic.MimeTypeBasedParser(...)
A parser that uses mime-types to determine how to parse a blob.
document_loaders.parsers.grobid.GrobidParser(...)
Loader that uses Grobid to load article PDF files.
document_loaders.parsers.grobid.ServerUnavailableException
document_loaders.parsers.html.bs4.BS4HTMLParser(*)
Parser that uses Beautiful Soup to parse HTML files.
document_loaders.parsers.language.code_segmenter.CodeSegmenter(code)
The abstract class for the code segmenter.
document_loaders.parsers.language.javascript.JavaScriptSegmenter(code)
The code segmenter for JavaScript.
document_loaders.parsers.language.language_parser.LanguageParser([...])
Language parser that splits code using the respective language syntax.
document_loaders.parsers.language.python.PythonSegmenter(code)
The code segmenter for Python.
document_loaders.parsers.pdf.PDFMinerParser()
Parse PDFs with PDFMiner.
document_loaders.parsers.pdf.PDFPlumberParser([...])
Parse PDFs with PDFPlumber.
document_loaders.parsers.pdf.PyMuPDFParser([...])
Parse PDFs with PyMuPDF.
document_loaders.parsers.pdf.PyPDFParser([...])
Loads a PDF with pypdf and chunks at character level.
document_loaders.parsers.pdf.PyPDFium2Parser()
Parse PDFs with PyPDFium2.
document_loaders.parsers.txt.TextParser()
Parser for text blobs.
document_loaders.pdf.BasePDFLoader(file_path)
Base loader class for PDF files.
document_loaders.pdf.MathpixPDFLoader(file_path)
Initialize with file path.
document_loaders.pdf.OnlinePDFLoader(file_path)
Loader that loads online PDFs.
document_loaders.pdf.PDFMinerLoader(file_path)
Loader that uses PDFMiner to load PDF files.
document_loaders.pdf.PDFMinerPDFasHTMLLoader(...)
Loader that uses PDFMiner to load PDF files as HTML content.
document_loaders.pdf.PDFPlumberLoader(file_path)
Loader that uses pdfplumber to load PDF files.
document_loaders.pdf.PyMuPDFLoader(file_path)
Loader that uses PyMuPDF to load PDF files.
document_loaders.pdf.PyPDFDirectoryLoader(path)
Loads a directory with PDF files with pypdf and chunks at character level.
document_loaders.pdf.PyPDFLoader(file_path)
Loads a PDF with pypdf and chunks at character level.
document_loaders.pdf.PyPDFium2Loader(file_path)
Loads a PDF with pypdfium2 and chunks at character level.
document_loaders.pdf.UnstructuredPDFLoader(...)
Loader that uses unstructured to load PDF files.
document_loaders.powerpoint.UnstructuredPowerPointLoader(...)
Loader that uses unstructured to load PowerPoint files.
document_loaders.psychic.PsychicLoader(...)
Loader that loads documents from Psychic.dev.
document_loaders.pyspark_dataframe.PySparkDataFrameLoader([...])
Load PySpark DataFrames.
document_loaders.python.PythonLoader(file_path)
Load Python files, respecting any non-default encoding if specified.
document_loaders.readthedocs.ReadTheDocsLoader(path)
Loader that loads a ReadTheDocs documentation directory dump.
document_loaders.recursive_url_loader.RecursiveUrlLoader(url)
Loader that loads all child links from a given URL.
document_loaders.reddit.RedditPostsLoader(...)
Reddit posts loader.
document_loaders.roam.RoamLoader(path)
Loader that loads Roam files from disk.
document_loaders.rst.UnstructuredRSTLoader(...)
Loader that uses unstructured to load RST files.
document_loaders.rtf.UnstructuredRTFLoader(...)
Loader that uses unstructured to load RTF files.
document_loaders.s3_directory.S3DirectoryLoader(bucket)
Loading logic for loading documents from S3.
document_loaders.s3_file.S3FileLoader(...)
Loading logic for loading documents from S3.
document_loaders.sitemap.SitemapLoader(web_path)
Loader that fetches a sitemap and loads those URLs.
document_loaders.slack_directory.SlackDirectoryLoader(...)
Loader for loading documents from a Slack directory dump.
document_loaders.snowflake_loader.SnowflakeLoader(...)
Loads a query result from Snowflake into a list of documents.
document_loaders.spreedly.SpreedlyLoader(...)
Loader that fetches data from the Spreedly API.
document_loaders.srt.SRTLoader(file_path)
Loader for .srt (subtitle) files.
document_loaders.stripe.StripeLoader(resource)
Loader that fetches data from Stripe.
document_loaders.telegram.TelegramChatApiLoader([...])
Loader that loads a Telegram chat JSON directory dump.
document_loaders.telegram.TelegramChatFileLoader(path)
Loader that loads a Telegram chat JSON directory dump.
document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(...)
Loading logic for loading documents from Tencent Cloud COS.
document_loaders.tencent_cos_file.TencentCOSFileLoader(...)
Loading logic for loading documents from Tencent Cloud COS.
document_loaders.text.TextLoader(file_path)
Load text files.
document_loaders.tomarkdown.ToMarkdownLoader(...)
Loader that loads HTML to markdown using 2markdown.
document_loaders.toml.TomlLoader(source)
A TOML document loader that inherits from the BaseLoader class.
document_loaders.trello.TrelloLoader(client, ...)
Trello loader.
document_loaders.twitter.TwitterTweetLoader(...)
Twitter tweets loader.
document_loaders.unstructured.UnstructuredAPIFileIOLoader(file)
UnstructuredAPIFileIOLoader uses the Unstructured API to load files.
document_loaders.unstructured.UnstructuredAPIFileLoader([...])
UnstructuredAPIFileLoader uses the Unstructured API to load files.
document_loaders.unstructured.UnstructuredBaseLoader([mode])
Loader that uses unstructured to load files.
document_loaders.unstructured.UnstructuredFileIOLoader(file)
UnstructuredFileIOLoader uses unstructured to load files.
document_loaders.unstructured.UnstructuredFileLoader(...)
UnstructuredFileLoader uses unstructured to load files.
document_loaders.url.UnstructuredURLLoader(urls)
Loader that uses unstructured to load HTML files.
document_loaders.url_playwright.PlaywrightURLLoader(urls)
Loader that uses Playwright to load a page and unstructured to load the HTML.
document_loaders.url_selenium.SeleniumURLLoader(urls)
Loader that uses Selenium to load a page and unstructured to load the HTML.
document_loaders.weather.WeatherDataLoader(...)
Weather Reader.
document_loaders.web_base.WebBaseLoader(web_path)
Loader that uses urllib and Beautiful Soup to load webpages.
document_loaders.whatsapp_chat.WhatsAppChatLoader(path)
Loader that loads a WhatsApp messages text file.
document_loaders.wikipedia.WikipediaLoader(query)
Loads a query result from www.wikipedia.org into a list of Documents.
document_loaders.word_document.Docx2txtLoader(...)
Loads a DOCX with docx2txt and chunks at character level.
document_loaders.word_document.UnstructuredWordDocumentLoader(...)
Loader that uses unstructured to load Word documents.
document_loaders.xml.UnstructuredXMLLoader(...)
Loader that uses unstructured to load XML files.
document_loaders.youtube.GoogleApiYoutubeLoader(...)
Loader that loads all videos from a channel.
document_loaders.youtube.YoutubeLoader(video_id)
Loader that loads YouTube transcripts.
Functions¶
document_loaders.chatgpt.concatenate_rows(...)
Combine message information in a readable format ready to be used.
document_loaders.facebook_chat.concatenate_rows(row)
Combine message information in a readable format ready to be used.
document_loaders.helpers.detect_file_encodings(...)
Try to detect the file encoding.
document_loaders.notebook.concatenate_cells(...)
Combine cell information in a readable format ready to be used.
document_loaders.notebook.remove_newlines(x)
Recursively remove newlines, no matter the data structure they are stored in.
document_loaders.parsers.registry.get_parser(...)
Get a parser by parser name.
document_loaders.telegram.concatenate_rows(row)
Combine message information in a readable format ready to be used.
document_loaders.telegram.text_to_docs(text)
Converts a string or list of strings to a list of Documents with metadata.
document_loaders.unstructured.get_elements_from_api([...])
Retrieves a list of elements from the Unstructured API.
document_loaders.unstructured.satisfies_min_unstructured_version(...)
Checks whether the installed unstructured version exceeds the minimum version for the feature in question.
document_loaders.unstructured.validate_unstructured_version(...)
Raises an error if the unstructured version does not exceed the specified minimum.
document_loaders.whatsapp_chat.concatenate_rows(...)
Combine message information in a readable format ready to be used.
langchain.document_transformers: Document Transformers¶
Transform documents.
Classes¶
document_transformers.EmbeddingsRedundantFilter
Filter that drops redundant documents by comparing their embeddings.
Functions¶
document_transformers.get_stateful_documents(...)
Convert a list of documents to a list of documents with state.
langchain.embeddings: Embeddings¶
Wrappers around embedding modules.
Classes¶
embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
Wrapper for Aleph Alpha's asymmetric embeddings. AA provides an endpoint to embed a document and a query.
embeddings.aleph_alpha.AlephAlphaSymmetricSemanticEmbedding
The symmetric version of Aleph Alpha's semantic embeddings.
embeddings.base.Embeddings()
Interface for embedding models.
embeddings.bedrock.BedrockEmbeddings
Embeddings provider to invoke Bedrock embedding models.
embeddings.clarifai.ClarifaiEmbeddings
Wrapper around Clarifai embedding models.
embeddings.cohere.CohereEmbeddings
Wrapper around Cohere embedding models.
embeddings.dashscope.DashScopeEmbeddings
Wrapper around DashScope embedding models.
embeddings.deepinfra.DeepInfraEmbeddings
Wrapper around Deep Infra's embedding inference service.
embeddings.elasticsearch.ElasticsearchEmbeddings(...)
Wrapper around Elasticsearch embedding models.
embeddings.embaas.EmbaasEmbeddings
Wrapper around Embaas's embedding service.
embeddings.embaas.EmbaasEmbeddingsPayload
Payload for the Embaas embeddings API.
embeddings.fake.FakeEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.google_palm.GooglePalmEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.huggingface.HuggingFaceEmbeddings
Wrapper around sentence_transformers embedding models.
embeddings.huggingface.HuggingFaceInstructEmbeddings
Wrapper around sentence_transformers embedding models.
embeddings.huggingface_hub.HuggingFaceHubEmbeddings
Wrapper around HuggingFaceHub embedding models.
embeddings.jina.JinaEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
embeddings.llamacpp.LlamaCppEmbeddings
Wrapper around llama.cpp embedding models.
embeddings.minimax.MiniMaxEmbeddings
Wrapper around MiniMax's embedding inference service.
embeddings.modelscope_hub.ModelScopeEmbeddings
Wrapper around modelscope_hub embedding models.
embeddings.mosaicml.MosaicMLInstructorEmbeddings
Wrapper around MosaicML's embedding inference service.
embeddings.octoai_embeddings.OctoAIEmbeddings
Wrapper around OctoAI Compute Service embedding models.
embeddings.openai.OpenAIEmbeddings
Wrapper around OpenAI embedding models.
embeddings.sagemaker_endpoint.EmbeddingsContentHandler()
Content handler for the LLM class.
embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings
Wrapper around custom Sagemaker Inference Endpoints.
embeddings.self_hosted.SelfHostedEmbeddings
Runs custom embedding models on self-hosted remote hardware.
embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceInstructEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
embeddings.spacy_embeddings.SpacyEmbeddings
SpacyEmbeddings is a class for generating embeddings using the spaCy library.
embeddings.tensorflow_hub.TensorflowHubEmbeddings
Wrapper around tensorflow_hub embedding models.
embeddings.vertexai.VertexAIEmbeddings
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
embeddings.dashscope.embed_with_retry(...)
Use tenacity to retry the embedding call.
embeddings.google_palm.embed_with_retry(...)
Use tenacity to retry the completion call.
embeddings.minimax.embed_with_retry(...)
Use tenacity to retry the completion call.
embeddings.openai.embed_with_retry(...)
Use tenacity to retry the embedding call.
embeddings.self_hosted_hugging_face.load_embedding_model(...)
Load the embedding model.
langchain.env: Env¶
Functions¶
env.get_runtime_environment()
Get information about the environment.
langchain.evaluation: Evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of LangChain primitives such as language models and chains. To load an evaluator, use the load_evaluators function with the names of the evaluators to load. To load one of the LangChain HuggingFace datasets, use the load_dataset function with the name of the dataset to load.
Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain
Judging the efficacy of an agent's tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain
This module also contains low-level APIs for creating custom evaluators for specific evaluation tasks. These include:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or LLM agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator: Evaluate the full sequence of actions taken by an agent.
Classes¶
evaluation.agents.trajectory_eval_chain.TrajectoryEval(...)
Create a new instance of TrajectoryEval(score, reasoning).
evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain
A chain for evaluating ReAct-style agents.
evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser
Create a new model by parsing and validating input data from keyword arguments.
evaluation.comparison.eval_chain.PairwiseStringEvalChain
A chain for comparing the output of two models.
evaluation.comparison.eval_chain.PairwiseStringResultOutputParser
A parser for the output of the PairwiseStringEvalChain.
evaluation.criteria.eval_chain.CriteriaEvalChain
LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser
A parser for the output of the CriteriaEvalChain.
evaluation.qa.eval_chain.ContextQAEvalChain
LLM Chain for evaluating QA without ground truth, based on context.
evaluation.qa.eval_chain.CotQAEvalChain
LLM Chain specifically for evaluating QA using chain-of-thought reasoning.
evaluation.qa.eval_chain.QAEvalChain
LLM Chain specifically for evaluating question answering.
evaluation.qa.generate_chain.QAGenerateChain
LLM Chain specifically for generating examples for question answering.
evaluation.run_evaluators.base.RunEvaluatorChain
Evaluate Run and optional examples.
evaluation.run_evaluators.base.RunEvaluatorOutputParser
Parse the output of a run.
evaluation.run_evaluators.implementations.ChoicesOutputParser
Parse a feedback run with optional choices.
evaluation.run_evaluators.implementations.CriteriaOutputParser
Parse criteria results into an evaluation result.
evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper
Maps the Run and Optional[Example] to a dictionary.
evaluation.run_evaluators.implementations.TrajectoryInputMapper
Maps the Run and Optional[Example] to a dictionary.
evaluation.run_evaluators.implementations.TrajectoryRunEvalOutputParser
Create a new model by parsing and validating input data from keyword arguments.
evaluation.schema.AgentTrajectoryEvaluator()
Interface for evaluating agent trajectories.
evaluation.schema.EvaluatorType(value[, ...])
The types of the evaluators.
evaluation.schema.LLMEvalChain
A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator()
A protocol for comparing the output of two models.
evaluation.schema.StringEvaluator()
Protocol for evaluating strings.
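As a minimal sketch of the StringEvaluator protocol (grade a prediction string against a reference label), the toy class below is an illustrative stand-in, not a langchain class; only the evaluate_strings method name mirrors the interface, and the simple score dictionary it returns is an assumption for illustration.

```python
# Toy stand-in for the StringEvaluator protocol: an evaluator grades a
# prediction string against a reference label. Illustrative only.
class ExactMatchStringEvaluator:
    """Scores 1.0 when prediction and reference match exactly, else 0.0."""

    def evaluate_strings(self, *, prediction: str, reference: str) -> dict:
        # Real evaluators return richer results (reasoning, value, ...);
        # a "score" key is the common denominator.
        score = 1.0 if prediction.strip() == reference.strip() else 0.0
        return {"score": score}


evaluator = ExactMatchStringEvaluator()
result = evaluator.evaluate_strings(prediction="Paris", reference="Paris")
```

A keyword-only signature keeps call sites explicit about which string is the prediction and which is the reference.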
Functions¶
evaluation.loading.load_dataset(uri)
Load a dataset from the LangChainDatasets HuggingFace org.
evaluation.loading.load_evaluator(evaluator, *)
Load the requested evaluation chain specified by a string.
evaluation.loading.load_evaluators(evaluators, *)
Load evaluators specified by a list of evaluator types.
evaluation.run_evaluators.implementations.get_criteria_evaluator(...)
Get an eval chain for grading a model's response against a map of criteria.
evaluation.run_evaluators.implementations.get_qa_evaluator(llm, *)
Get an eval chain that compares a response against ground truth.
evaluation.run_evaluators.implementations.get_trajectory_evaluator(...)
Get an eval chain for grading a model's response against a map of criteria.
langchain.example_generator: Example Generator¶
Utility functions for working with prompts.
Functions¶
example_generator.generate_example(examples, ...)
Return another example given a list of examples for a prompt.
langchain.experimental: Experimental¶
Classes¶
experimental.autonomous_agents.autogpt.memory.AutoGPTMemory
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.output_parser.AutoGPTAction(...)
Create a new instance of AutoGPTAction(name, args).
experimental.autonomous_agents.autogpt.output_parser.AutoGPTOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.autogpt.prompt.AutoGPTPrompt
Create a new model by parsing and validating input data from keyword arguments.
experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI
Controller model for the BabyAGI agent.
experimental.autonomous_agents.baby_agi.task_creation.TaskCreationChain
Chain to generate tasks.
experimental.autonomous_agents.baby_agi.task_execution.TaskExecutionChain
Chain to execute tasks.
experimental.autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
Chain to prioritize tasks.
experimental.generative_agents.generative_agent.GenerativeAgent
A character with memory and innate characteristics.
experimental.generative_agents.memory.GenerativeAgentMemory
Create a new model by parsing and validating input data from keyword arguments.
experimental.llms.jsonformer_decoder.JsonFormer
Create a new model by parsing and validating input data from keyword arguments.
experimental.llms.rellm_decoder.RELLM
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.agent_executor.PlanAndExecute
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.executors.base.BaseExecutor
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.executors.base.ChainExecutor
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.base.BasePlanner
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.base.LLMPlanner
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.planners.chat_planner.PlanningOutputParser
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.BaseStepContainer
Create a new model by parsing and validating input data from keyword arguments.
experimental.plan_and_execute.schema.ListStepContainer
Create a new model by parsing and validating input data from keyword arguments.
Create a new model by parsing and validating input data from keyword arguments. experimental.plan_and_execute.schema.PlanOutputParser Create a new model by parsing and validating input data from keyword arguments. experimental.plan_and_execute.schema.Step Create a new model by parsing and validating input data from keyword arguments. experimental.plan_and_execute.schema.StepResponse Create a new model by parsing and validating input data from keyword arguments. Functions¶ experimental.autonomous_agents.autogpt.output_parser.preprocess_json_input(...) Preprocesses a string to be parsed as json. experimental.autonomous_agents.autogpt.prompt_generator.get_prompt(tools) This function generates a prompt string. experimental.llms.jsonformer_decoder.import_jsonformer() Lazily import jsonformer. experimental.llms.rellm_decoder.import_rellm() Lazily import rellm. experimental.plan_and_execute.executors.agent_executor.load_agent_executor(...) Load an agent executor. experimental.plan_and_execute.planners.chat_planner.load_chat_planner(llm) Load a chat planner. langchain.formatting: Formatting¶ Utilities for formatting strings. Classes¶ formatting.StrictFormatter() A subclass of formatter that checks for extra keys. langchain.graphs: Graphs¶ Graph implementations. Classes¶ graphs.networkx_graph.KnowledgeTriple(...) A triple in the graph. Functions¶ graphs.networkx_graph.get_entities(entity_str) Extract entities from entity string. graphs.networkx_graph.parse_triples(...) Parse knowledge triples from the knowledge string. langchain.indexes: Indexes¶ All index utils. Classes¶ indexes.graph.GraphIndexCreator Functionality to create graph index. indexes.vectorstore.VectorStoreIndexWrapper Wrapper around a vectorstore for easy access. indexes.vectorstore.VectorstoreIndexCreator Logic for creating indexes.
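The `formatting.StrictFormatter` entry above gets only a one-line summary ("a subclass of formatter that checks for extra keys"). As a rough standalone illustration of that idea — a `str.format`-style formatter that rejects keyword arguments the template never uses — here is a sketch; it is not the library's implementation:

```python
import string

class StrictFormatter(string.Formatter):
    """Illustrative sketch: a Formatter that raises on unused keyword args
    instead of silently ignoring them (cf. formatting.StrictFormatter)."""

    def vformat(self, format_string, args, kwargs):
        # Collect the named fields actually referenced by the template.
        used = {name for _, name, _, _ in self.parse(format_string) if name}
        extra = set(kwargs) - used
        if extra:
            raise KeyError(f"Extra keys passed to formatter: {sorted(extra)}")
        return super().vformat(format_string, args, kwargs)

fmt = StrictFormatter()
greeting = fmt.format("Hello, {name}!", name="world")
```

Catching mistyped or leftover template variables early like this is the point: a plain `"...".format(**kwargs)` would happily drop the extras.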
langchain.input: Input¶ Handle chained inputs. Functions¶ input.get_bolded_text(text) Get bolded text. input.get_color_mapping(items[, excluded_colors]) Get mapping for items to a supported color. input.get_colored_text(text, color) Get colored text. input.print_text(text[, color, end, file]) Print text with highlighting and no end characters. langchain.llms: LLMs¶ Wrappers on top of large language models APIs. Classes¶ llms.ai21.AI21 Wrapper around AI21 large language models. llms.ai21.AI21PenaltyData Parameters for AI21 penalty data. llms.aleph_alpha.AlephAlpha Wrapper around Aleph Alpha large language models. llms.amazon_api_gateway.AmazonAPIGateway Wrapper around custom Amazon API Gateway. llms.anthropic.Anthropic Wrapper around Anthropic's large language models. llms.anyscale.Anyscale Wrapper around Anyscale Services. llms.aviary.Aviary Allow you to use an Aviary. llms.azureml_endpoint.AzureMLEndpointClient(...) Wrapper around AzureML Managed Online Endpoint Client. llms.azureml_endpoint.AzureMLOnlineEndpoint Wrapper around Azure ML Hosted models using Managed Online Endpoints. llms.azureml_endpoint.DollyContentFormatter() Content handler for the Dolly-v2-12b model. llms.azureml_endpoint.HFContentFormatter() Content handler for LLMs from the HuggingFace catalog. llms.azureml_endpoint.OSSContentFormatter() Content handler for LLMs from the OSS catalog. llms.bananadev.Banana Wrapper around Banana large language models. llms.base.BaseLLM
LLM wrapper should take in a prompt and return a string. llms.base.LLM LLM class that expects subclasses to implement a simpler call method. llms.baseten.Baseten Use your Baseten models in LangChain. llms.beam.Beam Wrapper around Beam API for gpt2 large language model. llms.bedrock.Bedrock LLM provider to invoke Bedrock models. llms.cerebriumai.CerebriumAI Wrapper around CerebriumAI large language models. llms.clarifai.Clarifai Wrapper around Clarifai's large language models. llms.cohere.Cohere Wrapper around Cohere large language models. llms.ctransformers.CTransformers Wrapper around the C Transformers LLM interface. llms.databricks.Databricks LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app. llms.deepinfra.DeepInfra Wrapper around DeepInfra deployed models. llms.fake.FakeListLLM Fake LLM wrapper for testing purposes. llms.forefrontai.ForefrontAI Wrapper around ForefrontAI large language models. llms.google_palm.GooglePalm Create a new model by parsing and validating input data from keyword arguments. llms.gooseai.GooseAI Wrapper around GooseAI large language models. llms.gpt4all.GPT4All Wrapper around GPT4All language models. llms.huggingface_endpoint.HuggingFaceEndpoint Wrapper around HuggingFaceHub Inference Endpoints. llms.huggingface_hub.HuggingFaceHub Wrapper around HuggingFaceHub models. llms.huggingface_pipeline.HuggingFacePipeline Wrapper around HuggingFace Pipeline API.
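The base contract in this module is simple: an LLM wrapper takes in a prompt and returns a string. `llms.fake.FakeListLLM` (listed above) applies that contract to testing by replaying canned responses. A minimal standalone sketch of that idea — not the LangChain class itself, which is a pydantic model with a richer interface:

```python
class FakeListLLM:
    """Illustrative stand-in LLM for tests: replays queued responses
    in order, cycling when it runs out (cf. llms.fake.FakeListLLM)."""

    def __init__(self, responses):
        self.responses = list(responses)
        self.i = 0

    def __call__(self, prompt: str) -> str:
        # The prompt is ignored; responses come back in fixed order.
        resp = self.responses[self.i % len(self.responses)]
        self.i += 1
        return resp
```

A fake like this lets chain and agent logic be tested deterministically, with no network calls and no model weights.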
llms.huggingface_text_gen_inference.HuggingFaceTextGenInference HuggingFace text generation inference API. llms.human.HumanInputLLM An LLM wrapper that returns user input as the response. llms.llamacpp.LlamaCpp Wrapper around the llama.cpp model. llms.manifest.ManifestWrapper Wrapper around HazyResearch's Manifest library. llms.modal.Modal Wrapper around Modal large language models. llms.mosaicml.MosaicML Wrapper around MosaicML's LLM inference service. llms.nlpcloud.NLPCloud Wrapper around NLPCloud large language models. llms.octoai_endpoint.OctoAIEndpoint Wrapper around OctoAI Inference Endpoints. llms.openai.AzureOpenAI Wrapper around Azure-specific OpenAI large language models. llms.openai.BaseOpenAI Wrapper around OpenAI large language models. llms.openai.OpenAI Wrapper around OpenAI large language models. llms.openai.OpenAIChat Wrapper around OpenAI Chat large language models. llms.openllm.IdentifyingParams Parameters for identifying a model as a typed dict. llms.openllm.OpenLLM Wrapper for accessing OpenLLM, supporting both in-process model instance and remote OpenLLM servers. llms.openlm.OpenLM Create a new model by parsing and validating input data from keyword arguments. llms.petals.Petals Wrapper around Petals Bloom models. llms.pipelineai.PipelineAI Wrapper around PipelineAI large language models. llms.predictionguard.PredictionGuard Wrapper around Prediction Guard large language models. llms.promptlayer_openai.PromptLayerOpenAI Wrapper around OpenAI large language models.
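Many of these wrappers share small text utilities; one listed under Functions in this module is `llms.utils.enforce_stop_tokens(text, stop)`, which cuts off generated text as soon as any stop word occurs. A standalone sketch of that behavior (illustrative, not the library source):

```python
import re

def enforce_stop_tokens(text: str, stop: list[str]) -> str:
    """Truncate text at the first occurrence of any stop sequence
    (cf. llms.utils.enforce_stop_tokens)."""
    # Escape each stop sequence and split on their alternation;
    # everything before the first match is kept.
    return re.split("|".join(map(re.escape, stop)), text)[0]
```

Stop-token enforcement like this is what keeps completion-style models from rambling past, say, an `Observation:` marker in an agent loop.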
llms.promptlayer_openai.PromptLayerOpenAIChat Wrapper around OpenAI large language models. llms.replicate.Replicate Wrapper around Replicate models. llms.rwkv.RWKV Wrapper around RWKV language models. llms.sagemaker_endpoint.ContentHandlerBase() A handler class to transform input from LLM to a format that SageMaker endpoint expects. llms.sagemaker_endpoint.LLMContentHandler() Content handler for LLM class. llms.sagemaker_endpoint.SagemakerEndpoint Wrapper around custom Sagemaker Inference Endpoints. llms.self_hosted.SelfHostedPipeline Run model inference on self-hosted remote hardware. llms.self_hosted_hugging_face.SelfHostedHuggingFaceLLM Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware. llms.stochasticai.StochasticAI Wrapper around StochasticAI large language models. llms.textgen.TextGen Wrapper around the text-generation-webui model. llms.vertexai.VertexAI Wrapper around Google Vertex AI large language models. llms.writer.Writer Wrapper around Writer large language models. Functions¶ llms.aviary.get_completions(model, prompt[, ...]) Get completions from Aviary models. llms.aviary.get_models() List available models. llms.base.create_base_retry_decorator(...[, ...]) Create a retry decorator for a given LLM and provided list of error types. llms.base.get_prompts(params, prompts) Get prompts that are already cached. llms.base.update_cache(existing_prompts, ...) Update the cache and get the LLM output. llms.cohere.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call. llms.databricks.get_default_api_token() Gets the default Databricks personal access token. llms.databricks.get_default_host() Gets the default Databricks workspace hostname. llms.databricks.get_repl_context() Gets the notebook REPL context if running inside a Databricks notebook. llms.google_palm.generate_with_retry(llm, ...) Use tenacity to retry the completion call. llms.loading.load_llm(file) Load LLM from file. llms.loading.load_llm_from_config(config) Load LLM from Config Dict. llms.openai.completion_with_retry(llm, **kwargs) Use tenacity to retry the completion call. llms.openai.update_token_usage(keys, ...) Update token usage. llms.utils.enforce_stop_tokens(text, stop) Cut off the text as soon as any stop words occur. llms.vertexai.completion_with_retry(llm, ...) Use tenacity to retry the completion call. llms.vertexai.is_codey_model(model_name) Returns True if the model name is a Codey model. langchain.load: Load¶ Classes¶ load.serializable.BaseSerialized Base class for serialized objects. load.serializable.Serializable Serializable base class. load.serializable.SerializedConstructor Serialized constructor. load.serializable.SerializedNotImplemented Serialized not implemented. load.serializable.SerializedSecret Serialized secret. Functions¶ load.dump.default(obj) Return a default value for a Serializable object or a SerializedNotImplemented object. load.dump.dumpd(obj) Return a json dict representation of an object. load.dump.dumps(obj, *[, pretty])
Return a json string representation of an object. load.load.loads(text, *[, secrets_map]) Load a JSON object from a string. load.serializable.to_json_not_implemented(obj) Serialize a "not implemented" object. langchain.math_utils: Math Utils¶ Math utils. Functions¶ math_utils.cosine_similarity(X, Y) Row-wise cosine similarity between two equal-width matrices. math_utils.cosine_similarity_top_k(X, Y[, ...]) Row-wise cosine similarity with optional top-k and score threshold filtering. langchain.memory: Memory¶ Classes¶ memory.buffer.ConversationBufferMemory Buffer for storing conversation memory. memory.buffer.ConversationStringBufferMemory Buffer for storing conversation memory. memory.buffer_window.ConversationBufferWindowMemory Buffer for storing conversation memory. memory.chat_memory.BaseChatMemory Create a new model by parsing and validating input data from keyword arguments. memory.chat_message_histories.cassandra.CassandraChatMessageHistory(...) Chat message history that stores history in Cassandra. memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(...) Chat history backed by Azure CosmosDB. memory.chat_message_histories.dynamodb.DynamoDBChatMessageHistory(...) Chat message history that stores history in AWS DynamoDB. memory.chat_message_histories.file.FileChatMessageHistory(...) Chat message history that stores history in a local file. memory.chat_message_histories.firestore.FirestoreChatMessageHistory(...) Chat history backed by Google Firestore. memory.chat_message_histories.in_memory.ChatMessageHistory In memory implementation of chat message history. memory.chat_message_histories.momento.MomentoChatMessageHistory(...) Chat message history cache that uses Momento as a backend. memory.chat_message_histories.mongodb.MongoDBChatMessageHistory(...)
Chat message history that stores history in MongoDB. memory.chat_message_histories.postgres.PostgresChatMessageHistory(...) Chat message history stored in a Postgres database. memory.chat_message_histories.redis.RedisChatMessageHistory(...) Chat message history stored in a Redis database. memory.chat_message_histories.sql.SQLChatMessageHistory(...) Chat message history stored in an SQL database. memory.chat_message_histories.zep.ZepChatMessageHistory(...) A ChatMessageHistory implementation that uses Zep as a backend. memory.combined.CombinedMemory Class for combining multiple memories' data together. memory.entity.BaseEntityStore Create a new model by parsing and validating input data from keyword arguments. memory.entity.ConversationEntityMemory Entity extractor & summarizer memory. memory.entity.InMemoryEntityStore Basic in-memory entity store. memory.entity.RedisEntityStore Redis-backed Entity store. memory.entity.SQLiteEntityStore SQLite-backed Entity store. memory.kg.ConversationKGMemory Knowledge graph memory for storing conversation memory. memory.motorhead_memory.MotorheadMemory Create a new model by parsing and validating input data from keyword arguments. memory.readonly.ReadOnlySharedMemory A memory wrapper that is read-only and cannot be changed. memory.simple.SimpleMemory Simple memory for storing context or other bits of information that shouldn't ever change between prompts. memory.summary.ConversationSummaryMemory Conversation summarizer to memory. memory.summary.SummarizerMixin Create a new model by parsing and validating input data from keyword arguments. memory.summary_buffer.ConversationSummaryBufferMemory Buffer with summarizer for storing conversation memory. memory.token_buffer.ConversationTokenBufferMemory Buffer for storing conversation memory. memory.vectorstore.VectorStoreRetrieverMemory Class for a VectorStore-backed memory object. Functions¶
memory.chat_message_histories.sql.create_message_model(...) Create a message model for a given table name. memory.utils.get_prompt_input_key(inputs, ...) Get the prompt input key. langchain.output_parsers: Output Parsers¶ Classes¶ output_parsers.boolean.BooleanOutputParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.combining.CombiningOutputParser Class to combine multiple output parsers into one. output_parsers.datetime.DatetimeOutputParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.enum.EnumOutputParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.fix.OutputFixingParser Wraps a parser and tries to fix parsing errors. output_parsers.list.CommaSeparatedListOutputParser Parse out comma separated lists. output_parsers.list.ListOutputParser Class to parse the output of an LLM call to a list. output_parsers.openai_functions.JsonKeyOutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.openai_functions.JsonOutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.openai_functions.OutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.openai_functions.PydanticAttrOutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.openai_functions.PydanticOutputFunctionsParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.pydantic.PydanticOutputParser Create a new model by parsing and validating input data from keyword arguments. output_parsers.rail_parser.GuardrailsOutputParser
Create a new model by parsing and validating input data from keyword arguments. output_parsers.regex.RegexParser Class to parse the output into a dictionary. output_parsers.regex_dict.RegexDictParser Class to parse the output into a dictionary. output_parsers.retry.RetryOutputParser Wraps a parser and tries to fix parsing errors. output_parsers.retry.RetryWithErrorOutputParser Wraps a parser and tries to fix parsing errors. output_parsers.structured.ResponseSchema Create a new model by parsing and validating input data from keyword arguments. output_parsers.structured.StructuredOutputParser Create a new model by parsing and validating input data from keyword arguments. Functions¶ output_parsers.json.parse_and_check_json_markdown(...) Parse a JSON string from a Markdown string and check that it contains the expected keys. output_parsers.json.parse_json_markdown(...) Parse a JSON string from a Markdown string. output_parsers.loading.load_output_parser(config) Load output parser. langchain.prompts: Prompts¶ Prompt template classes. Classes¶ prompts.base.StringPromptTemplate String prompt should expose the format method, returning a prompt. prompts.base.StringPromptValue Create a new model by parsing and validating input data from keyword arguments. prompts.chat.AIMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.chat.BaseChatPromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.chat.BaseMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.chat.BaseStringMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.chat.ChatMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments.
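Several of the output parsers in this module do plain string post-processing on the raw LLM reply; `output_parsers.list.CommaSeparatedListOutputParser` is the simplest case. A minimal standalone sketch of what "parse out comma separated lists" amounts to (illustrative, not the library code):

```python
def parse_comma_separated(text: str) -> list[str]:
    """Split an LLM's raw comma-separated reply into a clean list,
    trimming whitespace and dropping empty items
    (cf. output_parsers.list.CommaSeparatedListOutputParser)."""
    return [part.strip() for part in text.split(",") if part.strip()]
```

Paired with a prompt that instructs the model to "answer as a comma-separated list", this turns free text into a structure downstream code can iterate over.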
prompts.chat.ChatPromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.chat.ChatPromptValue Create a new model by parsing and validating input data from keyword arguments. prompts.chat.HumanMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.chat.MessagesPlaceholder Prompt template that assumes the variable is already a list of messages. prompts.chat.SystemMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. prompts.example_selector.base.BaseExampleSelector() Interface for selecting examples to include in prompts. prompts.example_selector.length_based.LengthBasedExampleSelector Select examples based on length. prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector Select and order examples based on ngram overlap score (sentence_bleu score). prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector ExampleSelector that selects examples based on Max Marginal Relevance. prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector Example selector that selects examples based on SemanticSimilarity. prompts.few_shot.FewShotPromptTemplate Prompt template that contains few shot examples. prompts.few_shot_with_templates.FewShotPromptWithTemplates Prompt template that contains few shot examples. prompts.pipeline.PipelinePromptTemplate A prompt template for composing multiple prompts together. prompts.prompt.PromptTemplate Schema to represent a prompt for an LLM. Functions¶ prompts.base.check_valid_template(template, ...) Check that template string is valid. prompts.base.jinja2_formatter(template, **kwargs) Format a template using jinja2. prompts.base.validate_jinja2(template, ...) Validate that the input variables are valid for the template.
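The validation helpers above (`prompts.base.check_valid_template`, `prompts.base.validate_jinja2`) share one idea: the placeholders in a template must match the declared input variables. A standalone sketch of that check for `str.format`-style templates — illustrative only; the library functions have different signatures and also support jinja2:

```python
import string

def check_valid_template(template: str, input_variables: list[str]) -> bool:
    """Return True if the template's named placeholders exactly match
    the declared input variables (a sketch of the idea behind
    prompts.base.check_valid_template)."""
    fields = {
        name
        for _, name, _, _ in string.Formatter().parse(template)
        if name  # skip literal-only segments and auto-numbered fields
    }
    return fields == set(input_variables)
```

Failing fast on a mismatch here is cheaper than discovering a `KeyError` (or a silently unfilled variable) at format time inside a chain.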
prompts.example_selector.ngram_overlap.ngram_overlap_score(...) Compute ngram overlap score of source and example as sentence_bleu score. prompts.example_selector.semantic_similarity.sorted_values(values) Return a list of values in dict sorted by key. prompts.loading.load_prompt(path) Unified method for loading a prompt from LangChainHub or local fs. prompts.loading.load_prompt_from_config(config) Load prompt from Config Dict. langchain.requests: Requests¶ Lightweight wrapper around requests library, with async support. Classes¶ requests.Requests Wrapper around requests to handle auth and async. requests.TextRequestsWrapper Lightweight wrapper around requests library. langchain.retrievers: Retrievers¶ Classes¶ retrievers.arxiv.ArxivRetriever It is effectively a wrapper for ArxivAPIWrapper. retrievers.azure_cognitive_search.AzureCognitiveSearchRetriever Wrapper around Azure Cognitive Search. retrievers.chatgpt_plugin_retriever.ChatGPTPluginRetriever Create a new model by parsing and validating input data from keyword arguments. retrievers.contextual_compression.ContextualCompressionRetriever Retriever that wraps a base retriever and compresses the results. retrievers.databerry.DataberryRetriever Retriever that uses the Databerry API. retrievers.docarray.DocArrayRetriever Retriever class for DocArray Document Indices. retrievers.docarray.SearchType(value[, ...]) Enumerator of the types of search to perform. retrievers.document_compressors.base.BaseDocumentCompressor Base abstraction interface for document compression. retrievers.document_compressors.base.DocumentCompressorPipeline Document compressor that uses a pipeline of transformers. retrievers.document_compressors.chain_extract.LLMChainExtractor
Create a new model by parsing and validating input data from keyword arguments. retrievers.document_compressors.chain_extract.NoOutputParser Parse outputs that could return a null string of some sort. retrievers.document_compressors.chain_filter.LLMChainFilter Filter that drops documents that aren't relevant to the query. retrievers.document_compressors.cohere_rerank.CohereRerank Create a new model by parsing and validating input data from keyword arguments. retrievers.document_compressors.embeddings_filter.EmbeddingsFilter Create a new model by parsing and validating input data from keyword arguments. retrievers.elastic_search_bm25.ElasticSearchBM25Retriever Wrapper around Elasticsearch using BM25 as a retrieval method. retrievers.kendra.AdditionalResultAttribute Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.AdditionalResultAttributeValue Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.AmazonKendraRetriever Retriever class to query documents from Amazon Kendra Index. retrievers.kendra.DocumentAttribute Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.DocumentAttributeValue Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.Highlight Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.QueryResult Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.QueryResultItem Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.RetrieveResult Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.RetrieveResultItem
Create a new model by parsing and validating input data from keyword arguments. retrievers.kendra.TextWithHighLights Create a new model by parsing and validating input data from keyword arguments. retrievers.knn.KNNRetriever KNN Retriever. retrievers.llama_index.LlamaIndexGraphRetriever Question-answering with sources over a LlamaIndex graph data structure. retrievers.llama_index.LlamaIndexRetriever Question-answering with sources over a LlamaIndex data structure. retrievers.merger_retriever.MergerRetriever This class merges the results of multiple retrievers. retrievers.metal.MetalRetriever Retriever that uses the Metal API. retrievers.milvus.MilvusRetriever Retriever that uses the Milvus API. retrievers.multi_query.LineList Create a new model by parsing and validating input data from keyword arguments. retrievers.multi_query.LineListOutputParser Create a new model by parsing and validating input data from keyword arguments. retrievers.multi_query.MultiQueryRetriever Given a user query, use an LLM to write a set of queries. retrievers.pinecone_hybrid_search.PineconeHybridSearchRetriever Create a new model by parsing and validating input data from keyword arguments. retrievers.pubmed.PubMedRetriever It is effectively a wrapper for PubMedAPIWrapper. retrievers.remote_retriever.RemoteLangChainRetriever Create a new model by parsing and validating input data from keyword arguments. retrievers.self_query.base.SelfQueryRetriever Retriever that wraps around a vector store and uses an LLM to generate the vector store queries.
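Among the retrievers in this module, `retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever` is summarized as "combining embedding similarity with recency". One way to picture that combination is a similarity score plus an exponential recency decay; the sketch below is illustrative only — the exact formula and parameter names are assumptions, not the library's code:

```python
def combined_score(similarity: float, hours_passed: float,
                   decay_rate: float = 0.01) -> float:
    """Illustrative recency-weighted score: a document's relevance is
    its embedding similarity plus a term that decays exponentially
    with the hours since it was last accessed."""
    recency = (1.0 - decay_rate) ** hours_passed
    return similarity + recency
```

With a small `decay_rate`, a recently touched but mediocre match can outrank a slightly better match that has gone stale — which is the point of time-weighted retrieval in agent memories.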
retrievers.self_query.chroma.ChromaTranslator() Logic for converting internal query language elements to valid filters. retrievers.self_query.myscale.MyScaleTranslator([...]) Logic for converting internal query language elements to valid filters. retrievers.self_query.pinecone.PineconeTranslator() Logic for converting internal query language elements to valid filters. retrievers.self_query.qdrant.QdrantTranslator(...) Logic for converting internal query language elements to valid filters. retrievers.self_query.weaviate.WeaviateTranslator() Logic for converting internal query language elements to valid filters. retrievers.svm.SVMRetriever SVM Retriever. retrievers.tfidf.TFIDFRetriever Create a new model by parsing and validating input data from keyword arguments. retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever Retriever combining embedding similarity with recency. retrievers.vespa_retriever.VespaRetriever Retriever that uses the Vespa. retrievers.weaviate_hybrid_search.WeaviateHybridSearchRetriever Retriever that uses Weaviate's hybrid search to retrieve documents. retrievers.wikipedia.WikipediaRetriever It is effectively a wrapper for WikipediaAPIWrapper. retrievers.zep.ZepRetriever A Retriever implementation for the Zep long-term memory store. retrievers.zilliz.ZillizRetriever Retriever that uses the Zilliz API. Functions¶ retrievers.document_compressors.chain_extract.default_get_input(...) Return the compression chain input. retrievers.document_compressors.chain_filter.default_get_input(...) Return the compression chain input. retrievers.kendra.clean_excerpt(excerpt) Cleans an excerpt from Kendra.
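The Functions list for this module includes `retrievers.pinecone_hybrid_search.hash_text(text)`, summarized as "Hash a text using SHA256" — used to give each chunk a stable identifier in the index. A standalone sketch of that kind of helper (illustrative; the library's return format is an assumption here):

```python
import hashlib

def hash_text(text: str) -> str:
    """Return a hex SHA-256 digest of the text, suitable as a stable
    document ID (cf. retrievers.pinecone_hybrid_search.hash_text)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()
```

Content-derived IDs like this make upserts idempotent: re-indexing the same chunk overwrites its previous entry instead of duplicating it.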
retrievers.kendra.combined_text(title, excerpt) Combines a title and an excerpt into a single string. retrievers.knn.create_index(contexts, embeddings) Create an index of embeddings for a list of contexts. retrievers.milvus.MilvusRetreiver(*args, ...) Deprecated MilvusRetreiver. retrievers.pinecone_hybrid_search.create_index(...) Create a Pinecone index from a list of contexts. retrievers.pinecone_hybrid_search.hash_text(text) Hash a text using SHA256. retrievers.self_query.myscale.DEFAULT_COMPOSER(op_name) Default composer for logical operators. retrievers.self_query.myscale.FUNCTION_COMPOSER(op_name) Composer for functions. retrievers.svm.create_index(contexts, embeddings) Create an index of embeddings for a list of contexts. retrievers.zilliz.ZillizRetreiver(*args, ...) Deprecated ZillizRetreiver. langchain.schema: Schema¶ Classes¶ schema.agent.AgentFinish(return_values, log) The final return value of an ActionAgent. schema.document.BaseDocumentTransformer() Abstract base class for document transformation systems. schema.document.Document Class for storing a piece of text and associated metadata. schema.memory.BaseChatMessageHistory() Abstract base class for storing chat message history. schema.memory.BaseMemory Base abstract class for memory in Chains. schema.messages.AIMessage A Message from an AI. schema.messages.BaseMessage The base abstract Message class. schema.messages.ChatMessage A Message that can be assigned an arbitrary speaker (i.e. schema.messages.FunctionMessage
A Message for passing the result of executing a function back to a model. schema.messages.HumanMessage A Message from a human. schema.messages.SystemMessage A Message for priming AI behavior, usually passed in as the first of a sequence of input messages. schema.output.ChatGeneration A single chat generation output. schema.output.ChatResult Class that contains all results for a single chat model call. schema.output.Generation A single text generation output. schema.output.LLMResult Class that contains all results for a batched LLM call. schema.output.RunInfo Class that contains metadata for a single execution of a Chain or model. schema.output_parser.BaseLLMOutputParser Abstract base class for parsing the outputs of a model. schema.output_parser.BaseOutputParser Class to parse the output of an LLM call. schema.output_parser.NoOpOutputParser 'No operation' OutputParser that returns the text as is. schema.output_parser.OutputParserException(error) Exception that output parsers should raise to signify a parsing error. schema.prompt.PromptValue Base abstract class for inputs to any language model. schema.prompt_template.BasePromptTemplate Base class for all prompt templates, returning a prompt. schema.retriever.BaseRetriever Abstract base class for a Document retrieval system. Functions¶ schema.messages.get_buffer_string(messages) Convert sequence of Messages to strings and concatenate them into one string. schema.messages.messages_from_dict(messages) Convert a sequence of messages from dicts to Message objects. schema.messages.messages_to_dict(messages) Convert a sequence of Messages to a list of dictionaries. schema.prompt_template.format_document(doc, ...) Format a document into a string based on a prompt template. langchain.server: Server¶
Script to run langchain-server locally using docker-compose. Functions¶ server.main() Run the langchain server locally. langchain.sql_database: Sql Database¶ SQLAlchemy wrapper around a database. Functions¶ sql_database.truncate_word(content, *, length) Truncate a string to a certain number of words, based on the max string length. langchain.text_splitter: Text Splitter¶ Functionality for splitting text. Classes¶ text_splitter.CharacterTextSplitter([separator]) Implementation of splitting text that looks at characters. text_splitter.HeaderType Header type as typed dict. text_splitter.Language(value[, names, ...]) Enum of the programming languages. text_splitter.LatexTextSplitter(**kwargs) Attempts to split the text along Latex-formatted layout elements. text_splitter.LineType Line type as typed dict. text_splitter.MarkdownTextSplitter(**kwargs) Attempts to split the text along Markdown-formatted headings. text_splitter.NLTKTextSplitter([separator]) Implementation of splitting text that looks at sentences using NLTK. text_splitter.PythonCodeTextSplitter(**kwargs) Attempts to split the text along Python syntax. text_splitter.RecursiveCharacterTextSplitter([...]) Implementation of splitting text that looks at characters. text_splitter.SentenceTransformersTokenTextSplitter([...]) Implementation of splitting text that looks at tokens. text_splitter.SpacyTextSplitter([separator, ...]) Implementation of splitting text that looks at sentences using Spacy. text_splitter.TextSplitter(chunk_size, ...) Interface for splitting text into chunks. text_splitter.TokenTextSplitter([...])
Implementation of splitting text that looks at tokens. Functions¶ text_splitter.split_text_on_tokens(*, text, ...) Split incoming text and return chunks. langchain.tools: Tools¶ Core toolkit implementations. Classes¶ tools.arxiv.tool.ArxivQueryRun Tool that adds the capability to search using the Arxiv API. tools.azure_cognitive_services.form_recognizer.AzureCogsFormRecognizerTool Tool that queries the Azure Cognitive Services Form Recognizer API. tools.azure_cognitive_services.image_analysis.AzureCogsImageAnalysisTool Tool that queries the Azure Cognitive Services Image Analysis API. tools.azure_cognitive_services.speech2text.AzureCogsSpeech2TextTool Tool that queries the Azure Cognitive Services Speech2Text API. tools.azure_cognitive_services.text2speech.AzureCogsText2SpeechTool Tool that queries the Azure Cognitive Services Text2Speech API. tools.base.BaseTool Interface LangChain tools must implement. tools.base.SchemaAnnotationError Raised when 'args_schema' is missing or has an incorrect type annotation. tools.base.StructuredTool Tool that can operate on any number of inputs. tools.base.Tool Tool that takes in function or coroutine directly. tools.base.ToolException An optional exception that tool throws when execution error occurs. tools.base.ToolMetaclass(name, bases, dct) Metaclass for BaseTool to ensure the provided args_schema tools.bing_search.tool.BingSearchResults Tool that has capability to query the Bing Search API and get back json. tools.bing_search.tool.BingSearchRun Tool that adds the capability to query the Bing search API. tools.brave_search.tool.BraveSearch Create a new model by parsing and validating input data from keyword arguments. tools.convert_to_openai.FunctionDescription
Representation of a callable function to the OpenAI API. tools.dataforseo_api_search.tool.DataForSeoAPISearchResults Tool that has capability to query the DataForSeo Google Search API and get back json. tools.dataforseo_api_search.tool.DataForSeoAPISearchRun Tool that adds the capability to query the DataForSeo Google search API. tools.ddg_search.tool.DuckDuckGoSearchResults Tool that queries the Duck Duck Go Search API and get back json. tools.ddg_search.tool.DuckDuckGoSearchRun Tool that adds the capability to query the DuckDuckGo search API. tools.file_management.copy.CopyFileTool Create a new model by parsing and validating input data from keyword arguments. tools.file_management.copy.FileCopyInput Input for CopyFileTool. tools.file_management.delete.DeleteFileTool Create a new model by parsing and validating input data from keyword arguments. tools.file_management.delete.FileDeleteInput Input for DeleteFileTool. tools.file_management.file_search.FileSearchInput Input for FileSearchTool. tools.file_management.file_search.FileSearchTool Create a new model by parsing and validating input data from keyword arguments. tools.file_management.list_dir.DirectoryListingInput Input for ListDirectoryTool. tools.file_management.list_dir.ListDirectoryTool Create a new model by parsing and validating input data from keyword arguments. tools.file_management.move.FileMoveInput Input for MoveFileTool. tools.file_management.move.MoveFileTool Create a new model by parsing and validating input data from keyword arguments. tools.file_management.read.ReadFileInput Input for ReadFileTool. tools.file_management.read.ReadFileTool Create a new model by parsing and validating input data from keyword arguments. tools.file_management.utils.BaseFileToolMixin Mixin for file system tools. tools.file_management.utils.FileValidationError
Error for paths outside the root directory. tools.file_management.write.WriteFileInput Input for WriteFileTool. tools.file_management.write.WriteFileTool Create a new model by parsing and validating input data from keyword arguments. tools.gmail.base.GmailBaseTool Create a new model by parsing and validating input data from keyword arguments. tools.gmail.create_draft.CreateDraftSchema Create a new model by parsing and validating input data from keyword arguments. tools.gmail.create_draft.GmailCreateDraft Create a new model by parsing and validating input data from keyword arguments. tools.gmail.get_message.GmailGetMessage Create a new model by parsing and validating input data from keyword arguments. tools.gmail.get_message.SearchArgsSchema Create a new model by parsing and validating input data from keyword arguments. tools.gmail.get_thread.GetThreadSchema Create a new model by parsing and validating input data from keyword arguments. tools.gmail.get_thread.GmailGetThread Create a new model by parsing and validating input data from keyword arguments. tools.gmail.search.GmailSearch Create a new model by parsing and validating input data from keyword arguments. tools.gmail.search.Resource(value[, names, ...]) Enumerator of Resources to search. tools.gmail.search.SearchArgsSchema Create a new model by parsing and validating input data from keyword arguments. tools.gmail.send_message.GmailSendMessage Create a new model by parsing and validating input data from keyword arguments. tools.gmail.send_message.SendMessageSchema Create a new model by parsing and validating input data from keyword arguments. tools.google_places.tool.GooglePlacesSchema Create a new model by parsing and validating input data from keyword arguments. tools.google_places.tool.GooglePlacesTool Tool that adds the capability to query the Google places API. tools.google_search.tool.GoogleSearchResults
Tool that has capability to query the Google Search API and get back json. tools.google_search.tool.GoogleSearchRun Tool that adds the capability to query the Google search API. tools.google_serper.tool.GoogleSerperResults Tool that has capability to query the Serper.dev Google Search API and get back json. tools.google_serper.tool.GoogleSerperRun Tool that adds the capability to query the Serper.dev Google search API. tools.graphql.tool.BaseGraphQLTool Base tool for querying a GraphQL API. tools.human.tool.HumanInputRun Tool that adds the capability to ask user for input. tools.ifttt.IFTTTWebhook IFTTT Webhook. tools.jira.tool.JiraAction Create a new model by parsing and validating input data from keyword arguments. tools.json.tool.JsonGetValueTool Tool for getting a value in a JSON spec. tools.json.tool.JsonListKeysTool Tool for listing keys in a JSON spec. tools.json.tool.JsonSpec Base class for JSON spec. tools.metaphor_search.tool.MetaphorSearchResults Tool that has capability to query the Metaphor Search API and get back json. tools.office365.base.O365BaseTool Create a new model by parsing and validating input data from keyword arguments. tools.office365.create_draft_message.CreateDraftMessageSchema Create a new model by parsing and validating input data from keyword arguments. tools.office365.create_draft_message.O365CreateDraftMessage Create a new model by parsing and validating input data from keyword arguments. tools.office365.events_search.O365SearchEvents Class for searching calendar events in Office 365 tools.office365.events_search.SearchEventsInput Input for SearchEmails Tool. tools.office365.messages_search.O365SearchEmails Class for searching email messages in Office 365
tools.office365.messages_search.SearchEmailsInput Input for SearchEmails Tool. tools.office365.send_event.O365SendEvent Create a new model by parsing and validating input data from keyword arguments. tools.office365.send_event.SendEventSchema Input for CreateEvent Tool. tools.office365.send_message.O365SendMessage Create a new model by parsing and validating input data from keyword arguments. tools.office365.send_message.SendMessageSchema Create a new model by parsing and validating input data from keyword arguments. tools.openapi.utils.api_models.APIOperation A model for a single API operation. tools.openapi.utils.api_models.APIProperty A model for a property in the query, path, header, or cookie params. tools.openapi.utils.api_models.APIPropertyBase Base model for an API property. tools.openapi.utils.api_models.APIPropertyLocation(value) The location of the property. tools.openapi.utils.api_models.APIRequestBody A model for a request body. tools.openapi.utils.api_models.APIRequestBodyProperty A model for a request body property. tools.openweathermap.tool.OpenWeatherMapQueryRun Tool that adds the capability to query using the OpenWeatherMap API. tools.playwright.base.BaseBrowserTool Base class for browser tools. tools.playwright.click.ClickTool Create a new model by parsing and validating input data from keyword arguments. tools.playwright.click.ClickToolInput Input for ClickTool. tools.playwright.current_page.CurrentWebPageTool Create a new model by parsing and validating input data from keyword arguments. tools.playwright.extract_hyperlinks.ExtractHyperlinksTool Extract all hyperlinks on the page. tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput Input for ExtractHyperlinksTool. tools.playwright.extract_text.ExtractTextTool Create a new model by parsing and validating input data from keyword arguments.
tools.playwright.get_elements.GetElementsTool Create a new model by parsing and validating input data from keyword arguments. tools.playwright.get_elements.GetElementsToolInput Input for GetElementsTool. tools.playwright.navigate.NavigateTool Create a new model by parsing and validating input data from keyword arguments. tools.playwright.navigate.NavigateToolInput Input for NavigateTool. tools.playwright.navigate_back.NavigateBackTool Navigate back to the previous page in the browser history. tools.plugin.AIPlugin AI Plugin Definition. tools.plugin.AIPluginTool Create a new model by parsing and validating input data from keyword arguments. tools.plugin.AIPluginToolSchema AIPluginToolSchema. tools.plugin.ApiConfig Create a new model by parsing and validating input data from keyword arguments. tools.powerbi.tool.InfoPowerBITool Tool for getting metadata about a PowerBI Dataset. tools.powerbi.tool.ListPowerBITool Tool for getting table names. tools.powerbi.tool.QueryPowerBITool Tool for querying a Power BI Dataset. tools.pubmed.tool.PubmedQueryRun Tool that adds the capability to search using the PubMed API. tools.python.tool.PythonAstREPLTool A tool for running python code in a REPL. tools.python.tool.PythonREPLTool A tool for running python code in a REPL. tools.requests.tool.BaseRequestsTool Base class for requests tools. tools.requests.tool.RequestsDeleteTool Tool for making a DELETE request to an API endpoint. tools.requests.tool.RequestsGetTool Tool for making a GET request to an API endpoint. tools.requests.tool.RequestsPatchTool Tool for making a PATCH request to an API endpoint. tools.requests.tool.RequestsPostTool Tool for making a POST request to an API endpoint. tools.requests.tool.RequestsPutTool
Tool for making a PUT request to an API endpoint. tools.scenexplain.tool.SceneXplainInput Input for SceneXplain. tools.scenexplain.tool.SceneXplainTool Tool that adds the capability to explain images. tools.searx_search.tool.SearxSearchResults Tool that has the capability to query a Searx instance and get back json. tools.searx_search.tool.SearxSearchRun Tool that adds the capability to query a Searx instance. tools.shell.tool.ShellInput Commands for the Bash Shell tool. tools.shell.tool.ShellTool Tool to run shell commands. tools.sleep.tool.SleepInput Input for SleepTool. tools.sleep.tool.SleepTool Tool that adds the capability to sleep. tools.spark_sql.tool.BaseSparkSQLTool Base tool for interacting with Spark SQL. tools.spark_sql.tool.InfoSparkSQLTool Tool for getting metadata about a Spark SQL. tools.spark_sql.tool.ListSparkSQLTool Tool for getting table names. tools.spark_sql.tool.QueryCheckerTool Use an LLM to check if a query is correct. tools.spark_sql.tool.QuerySparkSQLTool Tool for querying a Spark SQL. tools.sql_database.tool.BaseSQLDatabaseTool Base tool for interacting with a SQL database. tools.sql_database.tool.InfoSQLDatabaseTool Tool for getting metadata about a SQL database. tools.sql_database.tool.ListSQLDatabaseTool Tool for getting table names. tools.sql_database.tool.QuerySQLCheckerTool Use an LLM to check if a query is correct. tools.sql_database.tool.QuerySQLDataBaseTool Tool for querying a SQL database. tools.steamship_image_generation.tool.ModelName(value) Supported Image Models for generation. tools.steamship_image_generation.tool.SteamshipImageGenerationTool
Tool used to generate images from a text-prompt. tools.vectorstore.tool.BaseVectorStoreTool Base class for tools that use a VectorStore. tools.vectorstore.tool.VectorStoreQATool Tool for the VectorDBQA chain. tools.vectorstore.tool.VectorStoreQAWithSourcesTool Tool for the VectorDBQAWithSources chain. tools.wikipedia.tool.WikipediaQueryRun Tool that adds the capability to search using the Wikipedia API. tools.wolfram_alpha.tool.WolframAlphaQueryRun Tool that adds the capability to query using the Wolfram Alpha SDK. tools.youtube.search.YouTubeSearchTool Create a new model by parsing and validating input data from keyword arguments. tools.zapier.tool.ZapierNLAListActions Returns a list of all exposed (enabled) actions associated with tools.zapier.tool.ZapierNLARunAction Executes an action that is identified by action_id, must be exposed Functions¶ tools.azure_cognitive_services.utils.detect_file_src_type(...) Detect if the file is local or remote. tools.azure_cognitive_services.utils.download_audio_from_url(...) Download audio from url to local. tools.base.create_schema_from_function(...) Create a pydantic schema from a function's signature. tools.base.tool(*args[, return_direct, ...]) Make tools out of functions, can be used with or without arguments. tools.convert_to_openai.format_tool_to_openai_function(tool) Format tool into the OpenAI function API. tools.ddg_search.tool.DuckDuckGoSearchTool(...) Deprecated. tools.file_management.utils.get_validated_relative_path(...) Resolve a relative path, raising an error if not within the root directory. tools.file_management.utils.is_relative_to(...) Check if path is relative to root.
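The file-management helpers above (is_relative_to, get_validated_relative_path) exist to keep a tool's file access inside a configured root directory, guarding against path-traversal escapes. A minimal sketch of the idea in plain pathlib terms — illustrative names and behavior, not the library's exact implementation:

```python
from pathlib import Path


class FileValidationError(ValueError):
    """Raised when a requested path escapes the root directory."""


def is_relative_to(path: Path, root: Path) -> bool:
    # Path.is_relative_to exists on Python >= 3.9; this version also works on 3.8.
    try:
        path.relative_to(root)
        return True
    except ValueError:
        return False


def get_validated_relative_path(root: Path, user_path: str) -> Path:
    """Resolve user_path under root, refusing anything that escapes it."""
    root = root.resolve()
    candidate = (root / user_path).resolve()
    if not is_relative_to(candidate, root):
        raise FileValidationError(
            f"Path {user_path!r} is outside the allowed directory {root}"
        )
    return candidate
```

For example, `get_validated_relative_path(Path("/tmp/sandbox"), "../etc/passwd")` raises, while `"notes/a.txt"` resolves to a path safely inside the root.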
tools.gmail.utils.build_resource_service([...]) Build a Gmail service. tools.gmail.utils.clean_email_body(body) Clean email body. tools.gmail.utils.get_gmail_credentials([...]) Get credentials. tools.gmail.utils.import_google() Import google libraries. tools.gmail.utils.import_googleapiclient_resource_builder() Import googleapiclient.discovery.build function. tools.gmail.utils.import_installed_app_flow() Import InstalledAppFlow class. tools.interaction.tool.StdInInquireTool(...) Tool for asking the user for input. tools.office365.utils.authenticate() Authenticate using the Microsoft Graph API. tools.office365.utils.clean_body(body) Clean body of a message or event. tools.playwright.base.lazy_import_playwright_browsers() Lazy import playwright browsers. tools.playwright.utils.create_async_playwright_browser([...]) Create an async playwright browser. tools.playwright.utils.create_sync_playwright_browser([...]) Create a playwright browser. tools.playwright.utils.get_current_page(browser) Get the current page of the browser. tools.playwright.utils.run_async(coro) Run an async coroutine. tools.plugin.marshal_spec(txt) Convert the yaml or json serialized spec to a dict. tools.python.tool.sanitize_input(query) Sanitize input to the python REPL. tools.steamship_image_generation.utils.make_image_public(...) Upload a block to a signed URL and return the public URL. langchain.utilities: Utilities¶ General utilities. Classes¶ utilities.apify.ApifyWrapper Wrapper around Apify. utilities.arxiv.ArxivAPIWrapper Wrapper around ArxivAPI. utilities.awslambda.LambdaWrapper Wrapper for AWS Lambda SDK. utilities.bibtex.BibtexparserWrapper Wrapper around bibtexparser.
utilities.bing_search.BingSearchAPIWrapper Wrapper for Bing Search API. utilities.brave_search.BraveSearchWrapper Create a new model by parsing and validating input data from keyword arguments. utilities.dataforseo_api_search.DataForSeoAPIWrapper Create a new model by parsing and validating input data from keyword arguments. utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper Wrapper for DuckDuckGo Search API. utilities.google_places_api.GooglePlacesAPIWrapper Wrapper around Google Places API. utilities.google_search.GoogleSearchAPIWrapper Wrapper for Google Search API. utilities.google_serper.GoogleSerperAPIWrapper Wrapper around the Serper.dev Google Search API. utilities.graphql.GraphQLAPIWrapper Wrapper around GraphQL API. utilities.jira.JiraAPIWrapper Wrapper for Jira API. utilities.metaphor_search.MetaphorSearchAPIWrapper Wrapper for Metaphor Search API. utilities.openapi.HTTPVerb(value[, names, ...]) Enumerator of the HTTP verbs. utilities.openapi.OpenAPISpec OpenAPI Model that removes misformatted parts of the spec. utilities.openweathermap.OpenWeatherMapAPIWrapper Wrapper for OpenWeatherMap API using PyOWM. utilities.powerbi.PowerBIDataset Create PowerBI engine from dataset ID and credential or token. utilities.pupmed.PubMedAPIWrapper Wrapper around PubMed API. utilities.python.PythonREPL Simulates a standalone Python REPL. utilities.scenexplain.SceneXplainAPIWrapper Wrapper for SceneXplain API. utilities.searx_search.SearxResults(data) Dict like wrapper around search api results. utilities.searx_search.SearxSearchWrapper Wrapper for Searx API. utilities.serpapi.SerpAPIWrapper
Wrapper around SerpAPI. utilities.twilio.TwilioAPIWrapper Messaging Client using Twilio. utilities.wikipedia.WikipediaAPIWrapper Wrapper around WikipediaAPI. utilities.wolfram_alpha.WolframAlphaAPIWrapper Wrapper for Wolfram Alpha. utilities.zapier.ZapierNLAWrapper Wrapper for Zapier NLA. Functions¶ utilities.loading.try_load_from_hub(path, ...) Load configuration from hub. utilities.powerbi.fix_table_name(table) Add single quotes around table names that contain spaces. utilities.powerbi.json_to_md(json_contents) Converts a JSON object to a markdown table. utilities.vertexai.init_vertexai([project, ...]) Init vertexai. utilities.vertexai.raise_vertex_import_error() Raise ImportError related to Vertex SDK being not available. langchain.utils: Utils¶ Generic utility functions. Functions¶ utils.check_package_version(package[, ...]) Check the version of a package. utils.comma_list(items) utils.get_from_dict_or_env(data, key, env_key) Get a value from a dictionary or an environment variable. utils.get_from_env(key, env_key[, default]) Get a value from a dictionary or an environment variable. utils.guard_import(module_name, *[, ...]) Dynamically imports a module and raises a helpful exception if the module is not installed. utils.mock_now(dt_value) Context manager for mocking out datetime.now() in unit tests. utils.raise_for_status_with_text(response) Raise an error with the response text. utils.stringify_dict(data) Stringify a dictionary. utils.stringify_value(val) Stringify a value. utils.xor_args(*arg_groups) Validate specified keyword args are mutually exclusive.
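The get_from_env / get_from_dict_or_env helpers just listed implement a common lookup order: an explicitly passed value wins, otherwise the environment variable is consulted, otherwise a default or a helpful error. A rough sketch of that behavior (simplified, not the library's exact code):

```python
import os
from typing import Any, Dict, Optional


def get_from_env(key: str, env_key: str, default: Optional[str] = None) -> str:
    """Return the environment variable's value, a default, or raise a helpful error."""
    if env_key in os.environ and os.environ[env_key]:
        return os.environ[env_key]
    if default is not None:
        return default
    raise ValueError(
        f"Did not find {key}, please add an environment variable "
        f"`{env_key}` which contains it."
    )


def get_from_dict_or_env(
    data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None
) -> str:
    """Prefer an explicit value in `data`, falling back to the environment."""
    if key in data and data[key]:
        return data[key]
    return get_from_env(key, env_key, default=default)
```

For example, `get_from_dict_or_env({"api_key": "sk-123"}, "api_key", "MY_API_KEY")` returns the dictionary value without ever touching the environment.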
langchain.vectorstores: Vectorstores¶ Wrappers on top of vector stores. Classes¶ vectorstores.alibabacloud_opensearch.AlibabaCloudOpenSearch(...) Alibaba Cloud OpenSearch Vector Store vectorstores.analyticdb.AnalyticDB(...[, ...]) VectorStore implementation using AnalyticDB. vectorstores.annoy.Annoy(embedding_function, ...) Wrapper around Annoy vector database. vectorstores.atlas.AtlasDB(name[, ...]) Wrapper around Atlas: Nomic's neural database and rhizomatic instrument. vectorstores.awadb.AwaDB([table_name, ...]) Interface implemented by AwaDB vector stores. vectorstores.azuresearch.AzureSearch(...[, ...]) Initialize with necessary components. vectorstores.azuresearch.AzureSearchVectorStoreRetriever Create a new model by parsing and validating input data from keyword arguments. vectorstores.base.VectorStore() Interface for vector stores. vectorstores.base.VectorStoreRetriever Create a new model by parsing and validating input data from keyword arguments. vectorstores.cassandra.Cassandra(embedding, ...) Wrapper around Cassandra embeddings platform. vectorstores.chroma.Chroma([...]) Wrapper around ChromaDB embeddings platform. vectorstores.clarifai.Clarifai([user_id, ...]) Wrapper around Clarifai AI platform's vector store. vectorstores.clickhouse.Clickhouse(embedding) Wrapper around ClickHouse vector database vectorstores.clickhouse.ClickhouseSettings ClickHouse Client Configuration vectorstores.deeplake.DeepLake([...]) Wrapper around Deep Lake, a data lake for deep learning applications. vectorstores.docarray.base.DocArrayIndex(...) Initialize a vector store from DocArray's DocIndex.
vectorstores.docarray.hnsw.DocArrayHnswSearch(...) Wrapper around HnswLib storage. vectorstores.docarray.in_memory.DocArrayInMemorySearch(...) Wrapper around in-memory storage for exact search. vectorstores.elastic_vector_search.ElasticKnnSearch(...) A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index. vectorstores.elastic_vector_search.ElasticVectorSearch(...) Wrapper around Elasticsearch as a vector database. vectorstores.faiss.FAISS(embedding_function, ...) Wrapper around FAISS vector database. vectorstores.hologres.Hologres(...[, ndims, ...]) VectorStore implementation using Hologres. vectorstores.lancedb.LanceDB(connection, ...) Wrapper around LanceDB vector database. vectorstores.marqo.Marqo(client, index_name) Wrapper around Marqo database. vectorstores.matching_engine.MatchingEngine(...) Vertex Matching Engine implementation of the vector store. vectorstores.milvus.Milvus(embedding_function) Initialize wrapper around the milvus vector database. vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch(...) Wrapper around MongoDB Atlas Vector Search. vectorstores.myscale.MyScale(embedding[, config]) Wrapper around MyScale vector database vectorstores.myscale.MyScaleSettings MyScale Client Configuration vectorstores.opensearch_vector_search.OpenSearchVectorSearch(...) Wrapper around OpenSearch as a vector database. vectorstores.pgembedding.BaseModel(**kwargs) A simple constructor that allows initialization from kwargs. vectorstores.pgembedding.CollectionStore(...) A simple constructor that allows initialization from kwargs. vectorstores.pgembedding.EmbeddingStore(**kwargs) A simple constructor that allows initialization from kwargs.
vectorstores.pgembedding.PGEmbedding(...[, ...]) VectorStore implementation using Postgres and the pg_embedding extension. pg_embedding uses sequential scan by default, but you can create an HNSW index using the create_hnsw_index method. - connection_string is a Postgres connection string. - embedding_function is any embedding function implementing the langchain.embeddings.base.Embeddings interface. - collection_name is the name of the collection to use (default: langchain). NOTE: this is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist), so make sure the user has the right permissions to create tables. - distance_strategy is the distance strategy to use (default: EUCLIDEAN); EUCLIDEAN is the euclidean distance. - pre_delete_collection, if True, will delete the collection if it exists (default: False); useful for testing. vectorstores.pinecone.Pinecone(index, ...[, ...]) Wrapper around Pinecone vector database. vectorstores.qdrant.Qdrant(client, ...[, ...]) Wrapper around Qdrant vector database. vectorstores.redis.Redis(redis_url, ...) Wrapper around Redis vector database. vectorstores.redis.RedisVectorStoreRetriever Create a new model by parsing and validating input data from keyword arguments. vectorstores.rocksetdb.Rockset(client, ...) Wrapper around Rockset vector database. vectorstores.singlestoredb.DistanceStrategy(value) Enumerator of the Distance strategies for SingleStoreDB. vectorstores.singlestoredb.SingleStoreDB(...) This class serves as a Pythonic interface to the SingleStore DB database. vectorstores.singlestoredb.SingleStoreDBRetriever Retriever for SingleStoreDB vector stores.
vectorstores.sklearn.BaseSerializer(persist_path) Abstract base class for saving and loading data. vectorstores.sklearn.BsonSerializer(persist_path) Serializes data in binary json using the bson python package. vectorstores.sklearn.JsonSerializer(persist_path) Serializes data in json using the json package from python standard library. vectorstores.sklearn.ParquetSerializer(...) Serializes data in Apache Parquet format using the pyarrow package. vectorstores.sklearn.SKLearnVectorStore(...) A simple in-memory vector store based on the scikit-learn library NearestNeighbors implementation. vectorstores.sklearn.SKLearnVectorStoreException Exception raised by SKLearnVectorStore. vectorstores.starrocks.StarRocks(embedding) Wrapper around StarRocks vector database vectorstores.starrocks.StarRocksSettings StarRocks Client Configuration vectorstores.supabase.SupabaseVectorStore(...) VectorStore for a Supabase postgres database. vectorstores.tair.Tair(embedding_function, ...) Wrapper around Tair Vector store. vectorstores.tigris.Tigris(client, ...) Initialize Tigris vector store vectorstores.typesense.Typesense(...[, ...]) Wrapper around Typesense vector search. vectorstores.vectara.Vectara([...]) Implementation of Vector Store using Vectara. vectorstores.vectara.VectaraRetriever Create a new model by parsing and validating input data from keyword arguments. vectorstores.weaviate.Weaviate(client, ...) Wrapper around Weaviate vector database. vectorstores.zilliz.Zilliz(embedding_function) Initialize wrapper around the Zilliz vector database. Functions¶ vectorstores.alibabacloud_opensearch.create_metadata(fields) Create metadata from fields.
vectorstores.annoy.dependable_annoy_import() Import annoy if available, otherwise raise error. vectorstores.clickhouse.has_mul_sub_str(s, *args) Check if a string contains multiple substrings. vectorstores.faiss.dependable_faiss_import([...]) Import faiss if available, otherwise raise error. vectorstores.myscale.has_mul_sub_str(s, *args) Check if a string contains multiple substrings. vectorstores.starrocks.debug_output(s) Print a debug message if DEBUG is True. vectorstores.starrocks.get_named_result(...) Get a named result from a query. vectorstores.starrocks.has_mul_sub_str(s, *args) Check if a string has multiple substrings. vectorstores.utils.maximal_marginal_relevance(...) Calculate maximal marginal relevance.
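maximal_marginal_relevance greedily selects results that are relevant to the query while penalizing similarity to results already chosen, which is what vector stores use for diverse retrieval. A self-contained sketch using plain-Python cosine similarity (a simplified illustration — the library's version operates on NumPy arrays):

```python
import math
from typing import List


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity of two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def maximal_marginal_relevance(
    query: List[float],
    candidates: List[List[float]],
    lambda_mult: float = 0.5,
    k: int = 4,
) -> List[int]:
    """Greedily pick k candidate indices, trading off relevance vs. redundancy."""
    selected: List[int] = []
    while len(selected) < min(k, len(candidates)):
        best_idx, best_score = -1, -math.inf
        for i, emb in enumerate(candidates):
            if i in selected:
                continue
            relevance = cosine(query, emb)
            # Redundancy: highest similarity to anything already selected.
            redundancy = max(
                (cosine(emb, candidates[j]) for j in selected), default=0.0
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected
```

With a low lambda_mult the diversity penalty dominates, so a near-duplicate of an already-selected result is skipped in favor of an orthogonal one.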
langchain.agents.agent.Agent¶ class langchain.agents.agent.Agent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, allowed_tools: Optional[List[str]] = None)[source]¶ Bases: BaseSingleActionAgent Class responsible for calling the language model and deciding the action. This is driven by an LLMChain. The prompt in the LLMChain MUST include a variable called “agent_scratchpad” where the agent can put its intermediary work. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param allowed_tools: Optional[List[str]] = None¶ param llm_chain: langchain.chains.llm.LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Required]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. abstract classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶ Create a prompt for this class. dict(**kwargs: Any) → Dict[source]¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent[source]¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.Agent.html
Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]][source]¶ get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]¶ Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations. callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish[source]¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:

.. code-block:: python

   # If working with agent executor
   agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() → Dict[source]¶ validator validate_prompt  »  all fields[source]¶ Validate that prompt matches format. abstract property llm_prefix: str¶ Prefix to append the LLM call with. abstract property observation_prefix: str¶ Prefix to append the observation with. property return_values: List[str]¶ Return values of the agent.
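The class docstring above requires an “agent_scratchpad” variable in the prompt; get_full_inputs fills it by folding the (action, observation) history into text that the LLM reads on the next call. A simplified sketch of that folding — the prefix defaults here are illustrative, since the concrete strings come from each Agent subclass's llm_prefix and observation_prefix properties:

```python
from typing import List, Tuple


def construct_scratchpad(
    intermediate_steps: List[Tuple[str, str]],
    observation_prefix: str = "Observation: ",
    llm_prefix: str = "Thought: ",
) -> str:
    """Fold (action_log, observation) pairs into agent_scratchpad text.

    Each step contributes the raw LLM output that chose the action, the
    tool's observation, and a prefix inviting the next thought.
    """
    thoughts = ""
    for action_log, observation in intermediate_steps:
        thoughts += action_log
        thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
    return thoughts
```

The scratchpad string is then merged with the user inputs into the full LLMChain inputs, so the model sees its own prior reasoning on every planning step.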
langchain.agents.agent.AgentExecutor¶ class langchain.agents.agent.AgentExecutor(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]¶ Bases: Chain Consists of an agent using tools. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]¶ The agent to run for creating a plan and determining actions to take at each step of the execution loop. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param early_stopping_method: str = 'force'¶ The method to use for early stopping if the agent never returns AgentFinish. Either ‘force’ or ‘generate’.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.AgentExecutor.html
“force” returns a string saying that it stopped because it met a time or iteration limit. “generate” calls the agent’s LLM Chain one final time to generate a final answer based on the previous steps. param handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False¶ How to handle errors raised by the agent’s output parser. Defaults to False, which raises the error. If True, the error will be sent back to the LLM as an observation. If a string, the string itself will be sent to the LLM as an observation. If a callable function, the function will be called with the exception as an argument, and the result of that function will be passed to the agent as an observation. param max_execution_time: Optional[float] = None¶ The maximum amount of wall clock time to spend in the execution loop. param max_iterations: Optional[int] = 15¶ The maximum number of steps to take before ending the execution loop. Setting to ‘None’ could lead to an infinite loop. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the chain. Defaults to None. This metadata will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case.
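The three documented forms of handle_parsing_errors can be sketched as a small dispatch function. This is a hypothetical model of the behavior described above, not langchain's actual code:

```python
from typing import Callable, Union

class OutputParserException(Exception):
    """Stand-in for langchain's OutputParserException."""

def observation_for_error(
    handle: Union[bool, str, Callable[[OutputParserException], str]],
    error: OutputParserException,
) -> str:
    # Mirrors the three documented cases for handle_parsing_errors.
    if handle is False:
        raise error                  # default: the error propagates
    if handle is True:
        return str(error)            # error text goes back to the LLM as an observation
    if isinstance(handle, str):
        return handle                # fixed message used as the observation
    return handle(error)             # callable computes the observation

err = OutputParserException("Could not parse LLM output")
```

In each non-raising case the returned string is fed back to the agent as if it were a tool observation, giving the LLM a chance to recover.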
param return_intermediate_steps: bool = False¶ Whether to return the agent’s trajectory of intermediate steps at the end in addition to the final output. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param tools: Sequence[BaseTool] [Required]¶ The valid tools the agent can call. param verbose: bool [Optional]¶ Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Asynchronously execute the chain. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. return_only_outputs – Whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None include_run_info – Whether to include run info in the response. Defaults to False. Returns A dict of named outputs. Should contain all outputs specified in Chain.output_keys. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument. callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: await chain.arun("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." await chain.arun(question=question, context=context) # -> "The temperature in Boise is..." dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. Expects Chain._chain_type property to be implemented and for memory to be null. Parameters **kwargs – Keyword arguments passed to default pydantic.BaseModel.dict method. Returns A dictionary representation of the chain. Example .. code-block:: python chain.dict(exclude_unset=True) # -> {"_type": "foo", "verbose": False, …} classmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → AgentExecutor[source]¶ Create from agent and tools. lookup_tool(name: str) → BaseTool[source]¶ Lookup tool by name. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prepare chain inputs, including adding inputs from memory. Parameters inputs – Dictionary of raw inputs, or single input if chain expects
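Putting the pieces together, the executor's control flow (plan, act, observe, repeat, bounded by max_iterations, with a 'force'-style stop message once the budget is spent) can be sketched roughly as follows. The AgentAction/AgentFinish classes and the toy plan function are illustrative stand-ins, not langchain's implementation:

```python
from typing import Callable, Dict, List, Tuple, Union

class AgentAction:
    """Illustrative stand-in for langchain's AgentAction."""
    def __init__(self, tool: str, tool_input: str):
        self.tool, self.tool_input = tool, tool_input

class AgentFinish:
    """Illustrative stand-in for langchain's AgentFinish."""
    def __init__(self, output: str):
        self.output = output

def run_agent_loop(
    plan: Callable[[List[Tuple[AgentAction, str]]], Union[AgentAction, AgentFinish]],
    tools: Dict[str, Callable[[str], str]],
    max_iterations: int = 15,
) -> str:
    """Simplified AgentExecutor control flow: plan, act, observe, repeat."""
    intermediate_steps: List[Tuple[AgentAction, str]] = []
    for _ in range(max_iterations):
        decision = plan(intermediate_steps)
        if isinstance(decision, AgentFinish):
            return decision.output
        observation = tools[decision.tool](decision.tool_input)
        intermediate_steps.append((decision, observation))
    # 'force'-style early stopping once the iteration budget is spent.
    return "Agent stopped due to iteration limit or time limit."

def toy_plan(steps):
    # A deterministic stand-in for the LLM-backed agent.
    if not steps:
        return AgentAction("calculator", "2 + 2")
    return AgentFinish(f"The answer is {steps[-1][1]}")

# eval() here powers only the toy calculator tool in this sketch.
result = run_agent_loop(toy_plan, {"calculator": lambda expr: str(eval(expr))})
```

Each loop iteration corresponds to one `plan` call followed by one tool invocation, which is exactly what `return_intermediate_steps=True` would expose.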
only one param. Should contain all inputs specified in Chain.input_keys except for inputs that will be set by the chain’s memory. Returns A dictionary of all inputs, including those added by the chain’s memory. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prepare chain outputs, and save info about this run to memory. Parameters inputs – Dictionary of chain inputs, including any inputs added by chain memory. outputs – Dictionary of initial chain outputs. return_only_outputs – Whether to only return the chain outputs. If False, inputs are also added to the final outputs. Returns A dict of the final chain outputs. validator raise_callback_manager_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Convenience method for executing chain when there’s a single string output. The main difference between this method and Chain.__call__ is that this method can only be used for chains that return a single string output. If a Chain has more outputs, a non-string output, or you want to return the inputs/run info along with the outputs, use Chain.__call__. The other difference is that this method expects inputs to be passed directly in as positional arguments or keyword arguments, whereas Chain.__call__ expects a single input dictionary with all the inputs. Parameters *args – If the chain expects a single input, it can be passed in as the sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in addition to callbacks passed to the chain during construction, but only these runtime callbacks will propagate to calls to other objects. tags – List of string tags to pass to all callbacks. These will be passed in addition to tags passed to the chain during construction, but only these runtime tags will propagate to calls to other objects. **kwargs – If the chain expects multiple inputs, they can be passed in directly as keyword arguments. Returns The chain output as a string. Example # Suppose we have a single-input chain that takes a 'question' string: chain.run("What's the temperature in Boise, Idaho?") # -> "The temperature in Boise is..." # Suppose we have a multi-input chain that takes a 'question' string # and 'context' string: question = "What's the temperature in Boise, Idaho?" context = "Weather report for Boise, Idaho on 07/03/23..." chain.run(question=question, context=context) # -> "The temperature in Boise is..." save(file_path: Union[Path, str]) → None[source]¶ Raise error - saving not supported for Agent Executors. save_agent(file_path: Union[Path, str]) → None[source]¶ Save the underlying agent. validator set_verbose  »  verbose¶ Set the chain verbosity. Defaults to the global setting if not specified by the user. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_return_direct_tool  »  all fields[source]¶ Validate that tools are compatible with agent. validator validate_tools  »  all fields[source]¶ Validate that tools are compatible with agent. property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
langchain.agents.agent.AgentOutputParser¶ class langchain.agents.agent.AgentOutputParser[source]¶ Bases: BaseOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. abstract parse(text: str) → Union[AgentAction, AgentFinish][source]¶ Parse text into agent action/finish. parse_result(result: List[Generation]) → T¶ Parse a list of candidate model Generations into a specific format. The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation. Parameters result – A list of Generations to be parsed. The Generations are assumed to be different candidate outputs for a single model input. Returns Structured output. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Parse the output of an LLM call with the input prompt for context. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – String output of language model. prompt – Input PromptValue. Returns Structured output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
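A minimal parser in the spirit of the abstract parse method above, turning ReAct-style completion text into either an action or a finish. The regex, the class names, and the stand-in schema types are illustrative assumptions, not langchain's shipped parser:

```python
import re
from typing import NamedTuple, Union

class AgentAction(NamedTuple):
    # Hypothetical stand-ins for langchain's schema classes.
    tool: str
    tool_input: str
    log: str

class AgentFinish(NamedTuple):
    output: str
    log: str

class SimpleReActParser:
    """A toy AgentOutputParser-like class for ReAct-style completions."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            return AgentFinish(text.split("Final Answer:")[-1].strip(), text)
        match = re.search(r"Action: (.*?)\n+Action Input: (.*)", text, re.DOTALL)
        if match is None:
            # This is the kind of failure handle_parsing_errors deals with.
            raise ValueError(f"Could not parse LLM output: {text!r}")
        return AgentAction(match.group(1).strip(), match.group(2).strip(), text)

parser = SimpleReActParser()
action = parser.parse("Thought: look it up\nAction: search\nAction Input: langchain")
finish = parser.parse("Thought: done\nFinal Answer: 42")
```

Keeping the raw text in `log` preserves the model's reasoning for the scratchpad.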
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.AgentOutputParser.html
eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
langchain.agents.agent.BaseMultiActionAgent¶ class langchain.agents.agent.BaseMultiActionAgent[source]¶ Bases: BaseModel Base Agent class. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. abstract async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Actions specifying what tool to use. dict(**kwargs: Any) → Dict[source]¶ Return dictionary representation of agent. get_allowed_tools() → Optional[List[str]][source]¶ abstract plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[List[AgentAction], AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Actions specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish[source]¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None[source]¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example:
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.BaseMultiActionAgent.html
.. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict[source]¶ property return_values: List[str]¶ Return values of the agent.
langchain.agents.agent.BaseSingleActionAgent¶ class langchain.agents.agent.BaseSingleActionAgent[source]¶ Bases: BaseModel Base Agent class. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. abstract async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. dict(**kwargs: Any) → Dict[source]¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → BaseSingleActionAgent[source]¶ get_allowed_tools() → Optional[List[str]][source]¶ abstract plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish[source]¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.BaseSingleActionAgent.html
Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None[source]¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict[source]¶ property return_values: List[str]¶ Return values of the agent.
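The 'force' branch of return_stopped_response can be sketched like this; the stop message, the AgentFinish stand-in, and the error text are assumptions based on the documented behavior, not langchain's exact code:

```python
from typing import Any, Dict, List, Tuple

class AgentFinish:
    """Illustrative stand-in: final return values plus a log string."""
    def __init__(self, return_values: Dict[str, Any], log: str):
        self.return_values, self.log = return_values, log

def return_stopped_response(
    early_stopping_method: str,
    intermediate_steps: List[Tuple[Any, str]],
    **kwargs: Any,
) -> AgentFinish:
    if early_stopping_method == "force":
        # 'force' hands back a fixed string instead of consulting the LLM again.
        return AgentFinish(
            {"output": "Agent stopped due to iteration limit or time limit."}, ""
        )
    # 'generate' would run the LLM chain one final time (not modeled here).
    raise ValueError(f"Got unsupported early_stopping_method `{early_stopping_method}`")

finish = return_stopped_response("force", [])
```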
langchain.agents.agent.ExceptionTool¶ class langchain.agents.agent.ExceptionTool(*, name: str = '_Exception', description: str = 'Exception tool', args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False)[source]¶ Bases: BaseTool Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param args_schema: Optional[Type[BaseModel]] = None¶ Pydantic model class to validate and parse the tool’s input arguments. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated. Please use callbacks instead. param callbacks: Callbacks = None¶ Callbacks to be called during tool execution. param description: str = 'Exception tool'¶ Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. param handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False¶ Handle the content of the ToolException thrown. param metadata: Optional[Dict[str, Any]] = None¶ Optional metadata associated with the tool. Defaults to None This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param name: str = '_Exception'¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.ExceptionTool.html
The unique name of the tool that clearly communicates its purpose. param return_direct: bool = False¶ Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the tool. Defaults to None These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a tool with its use case. param verbose: bool = False¶ Whether to log the tool’s progress. __call__(tool_input: str, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → str¶ Make tool callable. async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool asynchronously. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run the tool. property args: dict¶
property is_single_input: bool¶ Whether the tool only accepts a single input. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
langchain.agents.agent.LLMSingleActionAgent¶ class langchain.agents.agent.LLMSingleActionAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, stop: List[str])[source]¶ Bases: BaseSingleActionAgent Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param llm_chain: langchain.chains.llm.LLMChain [Required]¶ param output_parser: langchain.agents.agent.AgentOutputParser [Required]¶ param stop: List[str] [Required]¶ async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. dict(**kwargs: Any) → Dict[source]¶ Return dictionary representation of agent. classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → BaseSingleActionAgent¶ get_allowed_tools() → Optional[List[str]]¶ plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶ Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent.LLMSingleActionAgent.html
callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶ Return response when agent has been stopped due to max iterations. save(file_path: Union[Path, str]) → None¶ Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict[source]¶ property return_values: List[str]¶ Return values of the agent.
langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit¶ class langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit[source]¶ Bases: BaseToolkit Toolkit for Azure Cognitive Services. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. get_tools() → List[BaseTool][source]¶ Get the tools in the toolkit.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit.html
langchain.agents.agent_toolkits.base.BaseToolkit¶ class langchain.agents.agent_toolkits.base.BaseToolkit[source]¶ Bases: BaseModel, ABC Class representing a collection of related tools. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. abstract get_tools() → List[BaseTool][source]¶ Get the tools in the toolkit.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.base.BaseToolkit.html
langchain.agents.agent_toolkits.csv.base.create_csv_agent¶ langchain.agents.agent_toolkits.csv.base.create_csv_agent(llm: BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) → AgentExecutor[source]¶ Create a CSV agent by loading the CSV into a dataframe and using the pandas agent.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.csv.base.create_csv_agent.html
langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit¶ class langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit(*, root_dir: Optional[str] = None, selected_tools: Optional[List[str]] = None)[source]¶ Bases: BaseToolkit Toolkit for interacting with local files. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param root_dir: Optional[str] = None¶ If specified, all file operations are made relative to root_dir. param selected_tools: Optional[List[str]] = None¶ If provided, only the selected tools are exposed. Defaults to all. get_tools() → List[BaseTool][source]¶ Get the tools in the toolkit. validator validate_tools  »  all fields[source]¶
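A sketch of what resolving paths relative to root_dir might look like, including a guard against escaping the root. This models the stated intent of root_dir as a sandbox; the helper name and exact checks are illustrative assumptions, not langchain's implementation:

```python
from pathlib import Path

def resolve_relative(root_dir: str, user_path: str) -> Path:
    """Hypothetical helper: make a file path relative to root_dir and
    refuse paths that resolve outside of it."""
    root = Path(root_dir).resolve()
    target = (root / user_path).resolve()
    # A safe target is the root itself or a descendant of it.
    if root != target and root not in target.parents:
        raise ValueError(f"Path {user_path!r} is outside of root_dir")
    return target

inside = resolve_relative("/tmp/workspace", "notes/todo.txt")
try:
    resolve_relative("/tmp/workspace", "../etc/passwd")
    escaped = False
except ValueError:
    escaped = True
```

The `..`-traversal case is rejected because resolving collapses the parent references before the containment check runs.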
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.file_management.toolkit.FileManagementToolkit.html
langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit¶ class langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit(*, api_resource: Resource = None)[source]¶ Bases: BaseToolkit Toolkit for interacting with Gmail. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param api_resource: Resource [Optional]¶ get_tools() → List[BaseTool][source]¶ Get the tools in the toolkit. model Config[source]¶ Bases: object Pydantic config. arbitrary_types_allowed = True¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.gmail.toolkit.GmailToolkit.html
langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit¶ class langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit(*, tools: List[BaseTool] = [])[source]¶ Bases: BaseToolkit Jira Toolkit. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param tools: List[langchain.tools.base.BaseTool] = []¶ classmethod from_jira_api_wrapper(jira_api_wrapper: JiraAPIWrapper) → JiraToolkit[source]¶ get_tools() → List[BaseTool][source]¶ Get the tools in the toolkit.
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit.html
langchain.agents.agent_toolkits.json.base.create_json_agent¶
rtdocs\api.python.langchain.com\en\latest\agents\langchain.agents.agent_toolkits.json.base.create_json_agent.html
langchain.agents.agent_toolkits.json.base.create_json_agent(llm: BaseLanguageModel, toolkit: JsonToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or
a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
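The prompt above has the agent walk a JSON blob one key at a time using Python-syntax paths like data["key"][0]. A toy version of a json_spec_list_keys-style tool might look like this; the function name and behavior are illustrative assumptions, and eval on untrusted input is unsafe outside of a sketch:

```python
from typing import Any

def json_spec_list_keys(data: Any, path: str = "data") -> str:
    """Toy version: evaluate a Python-syntax path like data["key"][0]
    against `data` and report the keys available at that location.
    eval() is used only because the agent emits Python-syntax paths;
    never do this with untrusted input in real code."""
    value = eval(path, {"data": data})
    if isinstance(value, dict):
        return ", ".join(value.keys())
    return f"Value at {path} is not a dict"

spec = {"paths": {"/pets": {"get": {"summary": "List pets"}}}}
top = json_spec_list_keys(spec)                        # keys at the root
nested = json_spec_list_keys(spec, 'data["paths"]["/pets"]')
```

Returning only the key names at each level is what forces the agent to extend the path one key at a time, as the prompt instructs.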