property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.stdout.StdOutCallbackHandler.html
langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler¶ class langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler[source]¶ Bases: AsyncCallbackHandler Callback handler that returns an async iterator. Methods __init__() aiter() on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id[, ...]) Run when chain ends running. on_chain_error(error, *, run_id[, ...]) Run when chain errors. on_chain_start(serialized, inputs, *, run_id) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run on new LLM token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run on retriever end. on_retriever_error(error, *, run_id[, ...]) Run on retriever error. on_retriever_start(serialized, query, *, run_id) Run on retriever start. on_text(text, *, run_id[, parent_run_id, tags]) Run on arbitrary text. on_tool_end(output, *, run_id[, ...]) Run when tool ends running.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html
on_tool_error(error, *, run_id[, ...]) Run when tool errors. on_tool_start(serialized, input_str, *, run_id) Run when tool starts running. Attributes always_verbose ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline queue done async aiter() → AsyncIterator[str][source]¶ async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on agent action. async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on agent end. async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when chain ends running. async on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when chain errors.
async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when chain starts running. async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. async on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on retriever end. async on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on retriever error. async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run on retriever start. async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on arbitrary text. async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when tool ends running. async on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when tool errors. async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when tool starts running. property always_verbose: bool¶ done: asyncio.locks.Event¶ property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks.
queue: asyncio.queues.Queue[str]¶ raise_error: bool = False¶ run_inline: bool = False¶
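The queue and done attributes listed above drive the handler's aiter() loop: on_llm_new_token enqueues each token, on_llm_end (or on_llm_error) sets the done event, and aiter() keeps yielding until the queue is drained and done is set. Below is a minimal self-contained sketch of that pattern in plain asyncio — the class name TokenStream and the fake_llm producer are illustrative stand-ins, not the library's implementation:

```python
import asyncio
from typing import AsyncIterator, List


class TokenStream:
    """Sketch of the queue/done pattern behind AsyncIteratorCallbackHandler."""

    def __init__(self) -> None:
        self.queue: "asyncio.Queue[str]" = asyncio.Queue()
        self.done = asyncio.Event()

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.queue.put_nowait(token)

    async def on_llm_end(self, response=None, **kwargs) -> None:
        self.done.set()

    async def aiter(self) -> AsyncIterator[str]:
        # Yield tokens until the producer signals completion and the queue is empty.
        while not self.queue.empty() or not self.done.is_set():
            get_task = asyncio.ensure_future(self.queue.get())
            done_task = asyncio.ensure_future(self.done.wait())
            finished, pending = await asyncio.wait(
                {get_task, done_task}, return_when=asyncio.FIRST_COMPLETED
            )
            for task in pending:
                task.cancel()
            if get_task in finished:
                yield get_task.result()


async def demo() -> List[str]:
    stream = TokenStream()

    async def fake_llm() -> None:
        # Stands in for an LLM run that fires the callbacks.
        for tok in ["Hello", ", ", "world"]:
            await stream.on_llm_new_token(tok)
        await stream.on_llm_end()

    producer = asyncio.ensure_future(fake_llm())
    tokens = [tok async for tok in stream.aiter()]
    await producer
    return tokens


tokens = asyncio.run(demo())
```

In real use the handler is passed via an LLM's callbacks while the LLM call runs as a background task, and the caller consumes handler.aiter() concurrently.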
langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler¶ class langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]¶ Bases: AsyncIteratorCallbackHandler Callback handler that returns an async iterator. Only the final output of the agent will be iterated. Instantiate AsyncFinalIteratorCallbackHandler. Parameters answer_prefix_tokens – Token sequence that prefixes the answer. Default is [“Final”, “Answer”, “:”] strip_tokens – Whether to ignore whitespace and newlines when comparing answer_prefix_tokens to the last tokens, to determine whether the answer has been reached. stream_prefix – Whether the answer prefix itself should also be streamed. Methods __init__(*[, answer_prefix_tokens, ...]) Instantiate AsyncFinalIteratorCallbackHandler. aiter() append_to_last_tokens(token) check_if_answer_reached() on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id[, ...]) Run when chain ends running. on_chain_error(error, *, run_id[, ...]) Run when chain errors. on_chain_start(serialized, inputs, *, run_id) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html
Run on new LLM token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run on retriever end. on_retriever_error(error, *, run_id[, ...]) Run on retriever error. on_retriever_start(serialized, query, *, run_id) Run on retriever start. on_text(text, *, run_id[, parent_run_id, tags]) Run on arbitrary text. on_tool_end(output, *, run_id[, ...]) Run when tool ends running. on_tool_error(error, *, run_id[, ...]) Run when tool errors. on_tool_start(serialized, input_str, *, run_id) Run when tool starts running. Attributes always_verbose ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline async aiter() → AsyncIterator[str]¶ append_to_last_tokens(token: str) → None[source]¶ check_if_answer_reached() → bool[source]¶ async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on agent action.
async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on agent end. async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when chain ends running. async on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when chain errors. async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when chain starts running. async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. async on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None¶ Run when LLM errors. async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled. async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on retriever end. async on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on retriever error. async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run on retriever start. async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run on arbitrary text. async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when tool ends running. async on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶ Run when tool errors.
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when tool starts running. property always_verbose: bool¶ done: asyncio.Event¶ property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. queue: asyncio.Queue[str]¶ raise_error: bool = False¶ run_inline: bool = False¶
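The answer_prefix_tokens machinery above (append_to_last_tokens / check_if_answer_reached) amounts to keeping a rolling window of the most recent tokens and comparing it against the prefix. A self-contained sketch of that logic follows — the class name AnswerDetector is hypothetical, and the real handler's whitespace handling under strip_tokens may differ in detail:

```python
from typing import List, Optional

DEFAULT_ANSWER_PREFIX = ["Final", "Answer", ":"]  # default per the docs above


class AnswerDetector:
    """Rolling-window detection of the answer prefix (illustrative sketch)."""

    def __init__(self, answer_prefix_tokens: Optional[List[str]] = None,
                 strip_tokens: bool = True) -> None:
        self.answer_prefix_tokens = answer_prefix_tokens or DEFAULT_ANSWER_PREFIX
        self.strip_tokens = strip_tokens
        self.last_tokens: List[str] = []

    def append_to_last_tokens(self, token: str) -> None:
        # Keep only as many recent tokens as the prefix is long.
        self.last_tokens.append(token)
        if len(self.last_tokens) > len(self.answer_prefix_tokens):
            self.last_tokens.pop(0)

    def check_if_answer_reached(self) -> bool:
        if self.strip_tokens:
            return ([t.strip() for t in self.last_tokens]
                    == [t.strip() for t in self.answer_prefix_tokens])
        return self.last_tokens == self.answer_prefix_tokens


detector = AnswerDetector()
streamed: List[str] = []
answer_reached = False
for token in ["I", " think", "Final", " Answer", ":", " 42"]:
    if answer_reached:
        streamed.append(token)  # only tokens after the prefix are surfaced
        continue
    detector.append_to_last_tokens(token)
    answer_reached = detector.check_if_answer_reached()
```

Everything before the prefix is suppressed; only the tokens after ["Final", "Answer", ":"] reach the iterator.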
langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler¶ class langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler[source]¶ Bases: BaseCallbackHandler Callback handler for streaming. Only works with LLMs that support streaming. Methods __init__() on_agent_action(action, **kwargs) Run on agent action. on_agent_finish(finish, **kwargs) Run on agent end. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run on new LLM token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, **kwargs) Run on arbitrary text. on_tool_end(output, **kwargs) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. Attributes
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html
ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_text(text: str, **kwargs: Any) → None[source]¶ Run on arbitrary text. on_tool_end(output: str, **kwargs: Any) → None[source]¶ Run when tool ends running. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Run when tool starts running. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶
Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
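In typical use this handler is attached via an LLM's callbacks list with streaming enabled; its essential per-token behavior is just a write-and-flush, so partial output appears while the model is still generating. A runnable sketch of that core — the out parameter is added here purely for testability (an assumption; the real handler writes to sys.stdout directly):

```python
import io
import sys
from typing import TextIO


def on_llm_new_token(token: str, out: TextIO = sys.stdout) -> None:
    # Write each token as it arrives and flush immediately so the text
    # shows up incrementally rather than after the LLM call returns.
    out.write(token)
    out.flush()


# Demonstrate against an in-memory stream instead of real stdout.
buf = io.StringIO()
for tok in ["Tokens ", "appear ", "incrementally"]:
    on_llm_new_token(tok, out=buf)
```

Because the handler only reacts to on_llm_new_token, it produces no output at all for LLMs that do not support streaming, as the class description notes.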
langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler¶ class langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]¶ Bases: StreamingStdOutCallbackHandler Callback handler for streaming in agents. Only works with agents using LLMs that support streaming. Only the final output of the agent will be streamed. Instantiate FinalStreamingStdOutCallbackHandler. Parameters answer_prefix_tokens – Token sequence that prefixes the answer. Default is [“Final”, “Answer”, “:”] strip_tokens – Whether to ignore whitespace and newlines when comparing answer_prefix_tokens to the last tokens, to determine whether the answer has been reached. stream_prefix – Whether the answer prefix itself should also be streamed. Methods __init__(*[, answer_prefix_tokens, ...]) Instantiate FinalStreamingStdOutCallbackHandler. append_to_last_tokens(token) check_if_answer_reached() on_agent_action(action, **kwargs) Run on agent action. on_agent_finish(finish, **kwargs) Run on agent end. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run on new LLM token.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler.html
on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, **kwargs) Run on arbitrary text. on_tool_end(output, **kwargs) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline append_to_last_tokens(token: str) → None[source]¶ check_if_answer_reached() → bool[source]¶ on_agent_action(action: AgentAction, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, **kwargs: Any) → None¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None¶ Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running. on_text(text: str, **kwargs: Any) → None¶ Run on arbitrary text. on_tool_end(output: str, **kwargs: Any) → None¶ Run when tool ends running. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None¶ Run when tool starts running. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
langchain.callbacks.streamlit.mutable_expander.ChildRecord¶ class langchain.callbacks.streamlit.mutable_expander.ChildRecord(type: ChildType, kwargs: Dict[str, Any], dg: DeltaGenerator)[source]¶ Bases: NamedTuple The child record as a NamedTuple. Create new instance of ChildRecord(type, kwargs, dg) Methods __init__() count(value, /) Return number of occurrences of value. index(value[, start, stop]) Return first index of value. Attributes dg Alias for field number 2 kwargs Alias for field number 1 type Alias for field number 0 count(value, /)¶ Return number of occurrences of value. index(value, start=0, stop=9223372036854775807, /)¶ Return first index of value. Raises ValueError if the value is not present. dg: DeltaGenerator¶ Alias for field number 2 kwargs: Dict[str, Any]¶ Alias for field number 1 type: ChildType¶ Alias for field number 0
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streamlit.mutable_expander.ChildRecord.html
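Because ChildRecord is a NamedTuple, each field is readable both by name and by position — that is what the "Alias for field number N" entries above mean. A small sketch, using a hypothetical stand-in for Streamlit's DeltaGenerator so it runs without streamlit installed:

```python
from typing import Any, Dict, NamedTuple


class FakeDeltaGenerator:
    """Hypothetical stand-in for streamlit.delta_generator.DeltaGenerator."""


class ChildRecord(NamedTuple):
    type: str                # a ChildType enum value in the real class
    kwargs: Dict[str, Any]
    dg: FakeDeltaGenerator


rec = ChildRecord("MARKDOWN", {"body": "**hi**"}, FakeDeltaGenerator())

# Access by name and by index refer to the same fields:
# rec.type is rec[0], rec.kwargs is rec[1], rec.dg is rec[2].
by_name = (rec.type, rec.kwargs, rec.dg)
by_index = (rec[0], rec[1], rec[2])
```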
langchain.callbacks.streamlit.mutable_expander.ChildType¶ class langchain.callbacks.streamlit.mutable_expander.ChildType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶ Bases: Enum The enumerator of the child type. Attributes MARKDOWN EXCEPTION EXCEPTION = 'EXCEPTION'¶ MARKDOWN = 'MARKDOWN'¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streamlit.mutable_expander.ChildType.html
langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState¶ class langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶ Bases: Enum Enumerator of the LLMThought state. Attributes THINKING RUNNING_TOOL COMPLETE COMPLETE = 'COMPLETE'¶ RUNNING_TOOL = 'RUNNING_TOOL'¶ THINKING = 'THINKING'¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState.html
langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler¶ class langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None)[source]¶ Bases: BaseCallbackHandler A callback handler that writes to a Streamlit app. Create a StreamlitCallbackHandler instance. Parameters parent_container – The st.container that will contain all the Streamlit elements that the Handler creates. max_thought_containers – The max number of completed LLM thought containers to show at once. When this threshold is reached, a new thought will cause the oldest thoughts to be collapsed into a “History” expander. Defaults to 4. expand_new_thoughts – Each LLM “thought” gets its own st.expander. This param controls whether that expander is expanded by default. Defaults to True. collapse_completed_thoughts – If True, LLM thought expanders will be collapsed when completed. Defaults to True. thought_labeler – An optional custom LLMThoughtLabeler instance. If unspecified, the handler will use the default thought labeling logic. Defaults to None. Methods __init__(parent_container, *[, ...]) Create a StreamlitCallbackHandler instance. on_agent_action(action[, color]) Run on agent action. on_agent_finish(finish[, color]) Run on agent end. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html
on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run on new LLM token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts running. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text[, color, end]) Run on arbitrary text. on_tool_end(output[, color, ...]) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶ Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts running. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when Retriever starts running. on_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) → None[source]¶ Run on arbitrary text. on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶ Run when tool ends running. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Run when tool starts running. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord¶ class langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord(name: str, input_str: str)[source]¶ Bases: NamedTuple The tool record as a NamedTuple. Create new instance of ToolRecord(name, input_str) Methods __init__() count(value, /) Return number of occurrences of value. index(value[, start, stop]) Return first index of value. Attributes input_str Alias for field number 1 name Alias for field number 0 count(value, /)¶ Return number of occurrences of value. index(value, start=0, stop=9223372036854775807, /)¶ Return first index of value. Raises ValueError if the value is not present. input_str: str¶ Alias for field number 1 name: str¶ Alias for field number 0
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord.html
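As a NamedTuple, ToolRecord supports both attribute and positional access, plus the inherited tuple methods listed above. A minimal behavioural sketch (a local stand-in defined here for illustration, rather than imported from langchain):

```python
from typing import NamedTuple

# Local stand-in for langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord,
# defined here so the NamedTuple semantics can be shown in isolation.
class ToolRecord(NamedTuple):
    name: str       # alias for field number 0
    input_str: str  # alias for field number 1

record = ToolRecord(name="search", input_str="weather in Paris")

# Attribute access and positional access are interchangeable.
assert record.name == record[0] == "search"
assert record.input_str == record[1] == "weather in Paris"

# The inherited tuple methods behave as documented above.
assert record.count("search") == 1             # occurrences of a value
assert record.index("weather in Paris") == 1   # first index of a value
```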
langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler¶ langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None) → BaseCallbackHandler[source]¶ Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards use with a LangChain Agent; it displays the Agent’s LLM and tool-usage “thoughts” inside a series of Streamlit expanders. Parameters parent_container – The st.container that will contain all the Streamlit elements that the Handler creates. max_thought_containers – The max number of completed LLM thought containers to show at once. When this threshold is reached, a new thought will cause the oldest thoughts to be collapsed into a “History” expander. Defaults to 4. expand_new_thoughts – Each LLM “thought” gets its own st.expander. This param controls whether that expander is expanded by default. Defaults to True. collapse_completed_thoughts – If True, LLM thought expanders will be collapsed when completed. Defaults to True. thought_labeler – An optional custom LLMThoughtLabeler instance. If unspecified, the handler will use the default thought labeling logic. Defaults to None. Returns A new StreamlitCallbackHandler instance. Note that this is an “auto-updating” API: if the installed version of Streamlit has a more recent StreamlitCallbackHandler implementation, an instance of that class will be used.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.streamlit.__init__.StreamlitCallbackHandler.html
langchain.callbacks.tracers.base.BaseTracer¶ class langchain.callbacks.tracers.base.BaseTracer(**kwargs: Any)[source]¶ Bases: BaseCallbackHandler, ABC Base interface for tracers. Methods __init__(**kwargs) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id])
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.base.BaseTracer.html
Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None[source]¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None[source]¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None[source]¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None[source]¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None[source]¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None[source]¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
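BaseTracer's contract is a start/end bookkeeping pattern: each on_*_start call registers an in-flight run in run_map under its run_id, and the matching on_*_end or on_*_error call finalizes that run and hands it to a persistence hook implemented by subclasses. A simplified, dependency-free sketch of that pattern (plain Python; the Run dataclass and MiniTracer below are illustrative stand-ins, not the library's classes):

```python
import uuid
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class Run:
    # Stand-in for langchain.callbacks.tracers.schemas.Run.
    run_id: uuid.UUID
    inputs: Dict[str, Any]
    outputs: Optional[Dict[str, Any]] = None
    error: Optional[str] = None

class MiniTracer:
    """Sketch of BaseTracer's lifecycle bookkeeping: on_*_start registers a
    run under its run_id, and the matching on_*_end / on_*_error call
    removes it from run_map and passes it to _persist_run."""

    def __init__(self) -> None:
        self.run_map: Dict[str, Run] = {}
        self.persisted: List[Run] = []

    def _persist_run(self, run: Run) -> None:
        # BaseTracer leaves persistence to subclasses; here we just collect.
        self.persisted.append(run)

    def on_chain_start(self, inputs: Dict[str, Any], *, run_id: uuid.UUID) -> None:
        self.run_map[str(run_id)] = Run(run_id=run_id, inputs=inputs)

    def on_chain_end(self, outputs: Dict[str, Any], *, run_id: uuid.UUID) -> None:
        run = self.run_map.pop(str(run_id))
        run.outputs = outputs
        self._persist_run(run)

    def on_chain_error(self, error: Exception, *, run_id: uuid.UUID) -> None:
        run = self.run_map.pop(str(run_id))
        run.error = repr(error)
        self._persist_run(run)

run_id = uuid.uuid4()
tracer = MiniTracer()
tracer.on_chain_start({"question": "hi"}, run_id=run_id)
tracer.on_chain_end({"answer": "hello"}, run_id=run_id)
assert tracer.persisted[0].outputs == {"answer": "hello"}
assert not tracer.run_map  # the run was removed once finished
```

The error path mirrors the success path: on_chain_error also pops the run from run_map, so a failed run is persisted exactly once with its error recorded instead of its outputs.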
langchain.callbacks.tracers.base.TracerException¶ class langchain.callbacks.tracers.base.TracerException[source]¶ Bases: Exception Base class for exceptions in tracers module. add_note()¶ Exception.add_note(note) – add a note to the exception with_traceback()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. args¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.base.TracerException.html
langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler¶ class langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler(evaluators: Sequence[RunEvaluator], max_workers: Optional[int] = None, client: Optional[LangChainPlusClient] = None, example_id: Optional[Union[UUID, str]] = None, skip_unfinished: bool = True, project_name: Optional[str] = None, **kwargs: Any)[source]¶ Bases: BaseTracer A tracer that runs a run evaluator whenever a run is persisted. Parameters evaluators (Sequence[RunEvaluator]) – The run evaluators to apply to all top-level runs. max_workers (int, optional) – The maximum number of worker threads to use for running the evaluators. If not specified, it will default to the number of evaluators. client (LangChainPlusClient, optional) – The LangChainPlusClient instance to use for evaluating the runs. If not specified, a new instance will be created. example_id (Union[UUID, str], optional) – The example ID to be associated with the runs. project_name (str, optional) – The LangSmith project name under which to organize eval chain runs. example_id¶ The example ID associated with the runs. Type Union[UUID, None] client¶ The LangChainPlusClient instance used for evaluating the runs. Type LangChainPlusClient evaluators¶ The sequence of run evaluators to be executed. Type Sequence[RunEvaluator] executor¶ The thread pool executor used for running the evaluators. Type ThreadPoolExecutor futures¶ The set of futures representing the running evaluators. Type Set[Future] skip_unfinished¶ Whether to skip runs that are unfinished or that raised an error. Type bool project_name¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.evaluation.EvaluatorCallbackHandler.html
The LangSmith project name under which to organize eval chain runs. Type Optional[str] Methods __init__(evaluators[, max_workers, client, ...]) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. wait_for_futures() Wait for all futures to complete. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. wait_for_futures() → None[source]¶ Wait for all futures to complete. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. name = 'evaluator_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
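The executor and futures attributes describe a common threading pattern: each persisted run is submitted to a ThreadPoolExecutor so evaluation never blocks the traced chain, and wait_for_futures() blocks until every pending evaluation finishes. A dependency-free sketch of that pattern (the handler class and its plain-callable evaluators are illustrative stand-ins, not the library's RunEvaluator API):

```python
from concurrent.futures import Future, ThreadPoolExecutor, wait
from typing import Any, Callable, List, Optional, Set

class MiniEvaluatorHandler:
    """Sketch of EvaluatorCallbackHandler's threading pattern: evaluators
    run in a worker pool, and wait_for_futures() drains the pending set
    before results are read."""

    def __init__(self, evaluators: List[Callable[[Any], Any]],
                 max_workers: Optional[int] = None) -> None:
        # Mirrors the documented default: one worker per evaluator
        # when max_workers is unspecified.
        self.evaluators = evaluators
        self.executor = ThreadPoolExecutor(max_workers=max_workers or len(evaluators))
        self.futures: Set[Future] = set()
        self.results: List[Any] = []

    def _persist_run(self, run: Any) -> None:
        # Every persisted run fans out to all evaluators in the pool.
        for evaluator in self.evaluators:
            self.futures.add(self.executor.submit(self._evaluate, evaluator, run))

    def _evaluate(self, evaluator: Callable[[Any], Any], run: Any) -> None:
        self.results.append(evaluator(run))  # list.append is thread-safe

    def wait_for_futures(self) -> None:
        # Block until every submitted evaluation has completed.
        pending, self.futures = self.futures, set()
        wait(pending)

handler = MiniEvaluatorHandler([len])      # trivially "evaluate" run length
handler._persist_run("some finished run")
handler.wait_for_futures()
assert handler.results == [len("some finished run")]
```

Calling wait_for_futures() before inspecting results is the important step: without it, the main thread may read the result list while worker threads are still appending to it.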
langchain.callbacks.tracers.langchain.LangChainTracer¶ class langchain.callbacks.tracers.langchain.LangChainTracer(example_id: Optional[Union[UUID, str]] = None, project_name: Optional[str] = None, client: Optional[LangChainPlusClient] = None, tags: Optional[List[str]] = None, **kwargs: Any)[source]¶ Bases: BaseTracer An implementation of the SharedTracer that POSTS to the langchain endpoint. Initialize the LangChain tracer. Methods __init__([example_id, project_name, client, ...]) Initialize the LangChain tracer. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Start a trace for an LLM run. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.langchain.LangChainTracer.html
on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. wait_for_futures() Wait for the given futures to complete. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run.
on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶ Start a trace for an LLM run. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. wait_for_futures() → None[source]¶ Wait for the given futures to complete. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶
Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, langchain.callbacks.tracers.schemas.Run]¶
langchain.callbacks.tracers.langchain.log_error_once¶ langchain.callbacks.tracers.langchain.log_error_once(method: str, exception: Exception) → None[source]¶ Log an error once.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.langchain.log_error_once.html
langchain.callbacks.tracers.langchain.wait_for_all_tracers¶ langchain.callbacks.tracers.langchain.wait_for_all_tracers() → None[source]¶ Wait for all tracers to finish.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.langchain.wait_for_all_tracers.html
langchain.callbacks.tracers.langchain_v1.get_headers¶ langchain.callbacks.tracers.langchain_v1.get_headers() → Dict[str, Any][source]¶ Get the headers for the LangChain API.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.langchain_v1.get_headers.html
langchain.callbacks.tracers.langchain_v1.LangChainTracerV1¶ class langchain.callbacks.tracers.langchain_v1.LangChainTracerV1(**kwargs: Any)[source]¶ Bases: BaseTracer An implementation of the SharedTracer that POSTS to the langchain endpoint. Initialize the LangChain tracer. Methods __init__(**kwargs) Initialize the LangChain tracer. load_default_session() Load the default tracing session and set it as the Tracer's session. load_session(session_name) Load a session with the given name from the tracer. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.langchain_v1.LangChainTracerV1.html
Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline load_default_session() → Union[TracerSessionV1, TracerSession][source]¶ Load the default tracing session and set it as the Tracer’s session. load_session(session_name: str) → Union[TracerSessionV1, TracerSession][source]¶ Load a session with the given name from the tracer. on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, langchain.callbacks.tracers.schemas.Run]¶
langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler¶ class langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler(example_id: Optional[Union[UUID, str]] = None, **kwargs: Any)[source]¶ Bases: BaseTracer A tracer that collects all nested runs in a list. This tracer is useful for inspection and evaluation purposes. Parameters example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string. Initialize the RunCollectorCallbackHandler. Parameters example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string. Methods __init__([example_id]) Initialize the RunCollectorCallbackHandler. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html
on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run.
on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks.
name = 'run-collector_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.run_collector.RunCollectorCallbackHandler.html
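The collection behavior described above can be sketched in plain Python (MiniRun and MiniRunCollector are illustrative stand-ins, not LangChain classes):

```python
from dataclasses import dataclass
from typing import List
from uuid import UUID, uuid4


@dataclass
class MiniRun:
    """Illustrative stand-in for langchain's Run schema."""
    id: UUID
    name: str


class MiniRunCollector:
    """Append every finished run to a list for later inspection,
    mirroring what RunCollectorCallbackHandler does with traced runs."""

    def __init__(self) -> None:
        self.traced_runs: List[MiniRun] = []

    def on_run_end(self, run: MiniRun) -> None:
        # Instead of printing or uploading the run, just collect it.
        self.traced_runs.append(run)


collector = MiniRunCollector()
collector.on_run_end(MiniRun(id=uuid4(), name="my_chain"))
print(len(collector.traced_runs))  # → 1
```

The same pattern applies with the real handler: pass it in `callbacks=[...]` and read the collected runs off the handler afterwards.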
langchain.callbacks.tracers.schemas.BaseRun¶ class langchain.callbacks.tracers.schemas.BaseRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None)[source]¶ Bases: BaseModel Base class for Run. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_execution_order: int [Required]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param parent_uuid: Optional[str] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param uuid: str [Required]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.BaseRun.html
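The field layout above can be mirrored with a plain dataclass to see the required vs. optional fields at a glance (BaseRunSketch is illustrative; the real class is a pydantic BaseModel that validates its keyword arguments):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, Optional


@dataclass
class BaseRunSketch:
    """Illustrative mirror of BaseRun's documented fields."""
    uuid: str                       # [Required]
    execution_order: int            # [Required]
    child_execution_order: int      # [Required]
    serialized: Dict[str, Any]      # [Required]
    session_id: int                 # [Required]
    parent_uuid: Optional[str] = None
    start_time: datetime = field(default_factory=datetime.now)
    end_time: datetime = field(default_factory=datetime.now)
    extra: Optional[Dict[str, Any]] = None
    error: Optional[str] = None


run = BaseRunSketch(
    uuid="run-1",
    execution_order=1,
    child_execution_order=1,
    serialized={"name": "llm"},
    session_id=7,
)
print(run.parent_uuid)  # → None
```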
langchain.callbacks.tracers.schemas.ChainRun¶ class langchain.callbacks.tracers.schemas.ChainRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, inputs: Dict[str, Any], outputs: Optional[Dict[str, Any]] = None, child_llm_runs: List[LLMRun] = None, child_chain_runs: List[ChainRun] = None, child_tool_runs: List[ToolRun] = None)[source]¶ Bases: BaseRun Class for ChainRun. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_chain_runs: List[langchain.callbacks.tracers.schemas.ChainRun] [Optional]¶ param child_execution_order: int [Required]¶ param child_llm_runs: List[langchain.callbacks.tracers.schemas.LLMRun] [Optional]¶ param child_tool_runs: List[langchain.callbacks.tracers.schemas.ToolRun] [Optional]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param inputs: Dict[str, Any] [Required]¶ param outputs: Optional[Dict[str, Any]] = None¶ param parent_uuid: Optional[str] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶
param uuid: str [Required]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.ChainRun.html
langchain.callbacks.tracers.schemas.LLMRun¶ class langchain.callbacks.tracers.schemas.LLMRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, prompts: List[str], response: Optional[LLMResult] = None)[source]¶ Bases: BaseRun Class for LLMRun. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_execution_order: int [Required]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param parent_uuid: Optional[str] = None¶ param prompts: List[str] [Required]¶ param response: Optional[langchain.schema.output.LLMResult] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param uuid: str [Required]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.LLMRun.html
langchain.callbacks.tracers.schemas.Run¶ class langchain.callbacks.tracers.schemas.Run(*, id: UUID, name: str, start_time: datetime, run_type: str, end_time: Optional[datetime] = None, extra: Optional[dict] = None, error: Optional[str] = None, serialized: Optional[dict] = None, events: Optional[List[Dict]] = None, inputs: dict, outputs: Optional[dict] = None, reference_example_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, execution_order: int, child_execution_order: int, child_runs: List[Run] = None)[source]¶ Bases: RunBase Run schema for the V2 API in the Tracer. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param child_execution_order: int [Required]¶ param child_runs: List[langchain.callbacks.tracers.schemas.Run] [Optional]¶ param end_time: Optional[datetime.datetime] = None¶ param error: Optional[str] = None¶ param events: Optional[List[Dict]] = None¶ param execution_order: int [Required]¶ param extra: Optional[dict] = None¶ param id: uuid.UUID [Required]¶ param inputs: dict [Required]¶ param name: str [Required]¶ param outputs: Optional[dict] = None¶ param parent_run_id: Optional[uuid.UUID] = None¶ param reference_example_id: Optional[uuid.UUID] = None¶ param run_type: str [Required]¶
param serialized: Optional[dict] = None¶ param start_time: datetime.datetime [Required]¶ param tags: Optional[List[str]] [Optional]¶ validator assign_name  »  all fields[source]¶ Assign name to the run.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.Run.html
langchain.callbacks.tracers.schemas.ToolRun¶ class langchain.callbacks.tracers.schemas.ToolRun(*, uuid: str, parent_uuid: Optional[str] = None, start_time: datetime = None, end_time: datetime = None, extra: Optional[Dict[str, Any]] = None, execution_order: int, child_execution_order: int, serialized: Dict[str, Any], session_id: int, error: Optional[str] = None, tool_input: str, output: Optional[str] = None, action: str, child_llm_runs: List[LLMRun] = None, child_chain_runs: List[ChainRun] = None, child_tool_runs: List[ToolRun] = None)[source]¶ Bases: BaseRun Class for ToolRun. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param action: str [Required]¶ param child_chain_runs: List[langchain.callbacks.tracers.schemas.ChainRun] [Optional]¶ param child_execution_order: int [Required]¶ param child_llm_runs: List[langchain.callbacks.tracers.schemas.LLMRun] [Optional]¶ param child_tool_runs: List[langchain.callbacks.tracers.schemas.ToolRun] [Optional]¶ param end_time: datetime.datetime [Optional]¶ param error: Optional[str] = None¶ param execution_order: int [Required]¶ param extra: Optional[Dict[str, Any]] = None¶ param output: Optional[str] = None¶ param parent_uuid: Optional[str] = None¶ param serialized: Dict[str, Any] [Required]¶ param session_id: int [Required]¶ param start_time: datetime.datetime [Optional]¶ param tool_input: str [Required]¶
param uuid: str [Required]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.ToolRun.html
langchain.callbacks.tracers.schemas.TracerSession¶ class langchain.callbacks.tracers.schemas.TracerSession(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, tenant_id: UUID, id: UUID)[source]¶ Bases: TracerSessionBase TracerSession schema for the V2 API. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param id: uuid.UUID [Required]¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶ param tenant_id: uuid.UUID [Required]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.TracerSession.html
langchain.callbacks.tracers.schemas.TracerSessionBase¶ class langchain.callbacks.tracers.schemas.TracerSessionBase(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, tenant_id: UUID)[source]¶ Bases: TracerSessionV1Base A creation class for TracerSession. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶ param tenant_id: uuid.UUID [Required]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.TracerSessionBase.html
langchain.callbacks.tracers.schemas.TracerSessionV1¶ class langchain.callbacks.tracers.schemas.TracerSessionV1(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None, id: int)[source]¶ Bases: TracerSessionV1Base TracerSessionV1 schema. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param id: int [Required]¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.TracerSessionV1.html
langchain.callbacks.tracers.schemas.TracerSessionV1Base¶ class langchain.callbacks.tracers.schemas.TracerSessionV1Base(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None)[source]¶ Bases: BaseModel Base class for TracerSessionV1. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.TracerSessionV1Base.html
langchain.callbacks.tracers.schemas.TracerSessionV1Create¶ class langchain.callbacks.tracers.schemas.TracerSessionV1Create(*, start_time: datetime = None, name: Optional[str] = None, extra: Optional[Dict[str, Any]] = None)[source]¶ Bases: TracerSessionV1Base Create class for TracerSessionV1. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param extra: Optional[Dict[str, Any]] = None¶ param name: Optional[str] = None¶ param start_time: datetime.datetime [Optional]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.schemas.TracerSessionV1Create.html
langchain.callbacks.tracers.stdout.ConsoleCallbackHandler¶ class langchain.callbacks.tracers.stdout.ConsoleCallbackHandler(**kwargs: Any)[source]¶ Bases: BaseTracer Tracer that prints to the console. Methods __init__(**kwargs) get_breadcrumbs(run) get_parents(run) on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running.
on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. name raise_error run_inline get_breadcrumbs(run: Run) → str[source]¶ get_parents(run: Run) → List[Run][source]¶ on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run. on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running.
on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run. on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. name = 'console_callback_handler'¶ raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.stdout.ConsoleCallbackHandler.html
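get_parents and get_breadcrumbs suggest a parent-chain walk from a run up to the root of its trace. A plain-Python sketch of that idea (MiniRun and the " > " separator are assumptions for illustration, not the library's actual format):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MiniRun:
    """Hypothetical stand-in with just the fields a breadcrumb needs."""
    name: str
    parent: Optional["MiniRun"] = None


def get_parents_sketch(run: MiniRun) -> List[MiniRun]:
    # Walk parent links from the run up to the root.
    parents: List[MiniRun] = []
    node = run.parent
    while node is not None:
        parents.append(node)
        node = node.parent
    return parents


def get_breadcrumbs_sketch(run: MiniRun) -> str:
    # Root-to-leaf path joined into one header line, console-trace style.
    chain = list(reversed(get_parents_sketch(run))) + [run]
    return " > ".join(r.name for r in chain)


root = MiniRun("AgentExecutor")
child = MiniRun("LLMChain", parent=root)
print(get_breadcrumbs_sketch(child))  # → AgentExecutor > LLMChain
```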
langchain.callbacks.tracers.stdout.elapsed¶ langchain.callbacks.tracers.stdout.elapsed(run: Any) → str[source]¶ Get the elapsed time of a run. Parameters run – any object with start_time and end_time attributes. Returns A string with the elapsed time in seconds, or in milliseconds if the time is less than a second.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.stdout.elapsed.html
langchain.callbacks.tracers.stdout.try_json_stringify¶ langchain.callbacks.tracers.stdout.try_json_stringify(obj: Any, fallback: str) → str[source]¶ Try to stringify an object to JSON. Parameters obj – Object to stringify. fallback – Fallback string to return if the object cannot be stringified. Returns A JSON string if the object can be stringified, otherwise the fallback string.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.stdout.try_json_stringify.html
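The documented contract can be sketched directly with json.dumps (an illustration; the indent choice and exception handling are assumptions about the real implementation):

```python
import json
from typing import Any


def try_json_stringify_sketch(obj: Any, fallback: str) -> str:
    """Return obj serialized as JSON, or fallback if it cannot be serialized."""
    try:
        return json.dumps(obj, indent=2, ensure_ascii=False)
    except Exception:
        return fallback


print(try_json_stringify_sketch({"a": 1}, "<not serializable>"))
print(try_json_stringify_sketch(object(), "<not serializable>"))  # → <not serializable>
```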
langchain.callbacks.tracers.wandb.WandbRunArgs¶ class langchain.callbacks.tracers.wandb.WandbRunArgs[source]¶ Bases: TypedDict Arguments for the WandbTracer. Methods __init__(*args, **kwargs) clear() copy() fromkeys([value]) Create a new dictionary with keys from iterable and values set to value. get(key[, default]) Return the value for key if key is in the dictionary, else default. items() keys() pop(k[,d]) If the key is not found, return the default if given; otherwise, raise a KeyError. popitem() Remove and return a (key, value) pair as a 2-tuple. setdefault(key[, default]) Insert key with a value of default if key is not in the dictionary. update([E, ]**F) If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() Attributes job_type dir config project entity reinit tags group name notes magic config_exclude_keys config_include_keys anonymous mode allow_val_change resume force tensorboard sync_tensorboard monitor_gym save_code id settings clear() → None.  Remove all items from D.¶ copy() → a shallow copy of D¶ fromkeys(value=None, /)¶ Create a new dictionary with keys from iterable and values set to value. get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default. items() → a set-like object providing a view on D's items¶ keys() → a set-like object providing a view on D's keys¶ pop(k[, d]) → v, remove specified key and return the corresponding value.¶ If the key is not found, return the default if given; otherwise, raise a KeyError. popitem()¶ Remove and return a (key, value) pair as a 2-tuple. Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty. setdefault(key, default=None, /)¶ Insert key with a value of default if key is not in the dictionary. Return the value for key if key is in the dictionary, else default. update([E, ]**F) → None.  Update D from dict/iterable E and F.¶ If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] values() → an object providing a view on D's values¶ allow_val_change: Optional[bool]¶ anonymous: Optional[str]¶ config: Union[Dict, str, None]¶ config_exclude_keys: Optional[List[str]]¶ config_include_keys: Optional[List[str]]¶ dir: Optional[StrPath]¶ entity: Optional[str]¶ force: Optional[bool]¶ group: Optional[str]¶ id: Optional[str]¶
job_type: Optional[str]¶ magic: Optional[Union[dict, str, bool]]¶ mode: Optional[str]¶ monitor_gym: Optional[bool]¶ name: Optional[str]¶ notes: Optional[str]¶ project: Optional[str]¶ reinit: Optional[bool]¶ resume: Optional[Union[bool, str]]¶ save_code: Optional[bool]¶ settings: Union[WBSettings, Dict[str, Any], None]¶ sync_tensorboard: Optional[bool]¶ tags: Optional[Sequence]¶ tensorboard: Optional[bool]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.wandb.WandbRunArgs.html
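Because WandbRunArgs is a TypedDict, it is an ordinary dict at runtime with per-key type hints, so it can be passed straight to wandb.init(**args). The pattern can be sketched with a small illustrative subset of the keys (RunArgsSketch and its values are hypothetical):

```python
from typing import List, Optional, TypedDict


class RunArgsSketch(TypedDict, total=False):
    """Illustrative subset of WandbRunArgs; total=False makes all keys
    optional, matching the Optional-typed attributes listed above."""
    project: Optional[str]
    entity: Optional[str]
    name: Optional[str]
    tags: Optional[List[str]]


# At runtime this is just a dict, so it unpacks cleanly into wandb.init(**args).
args: RunArgsSketch = {"project": "langchain-demo", "name": "trace-run"}
print(sorted(args))  # → ['name', 'project']
```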
langchain.callbacks.tracers.wandb.WandbTracer¶ class langchain.callbacks.tracers.wandb.WandbTracer(run_args: Optional[WandbRunArgs] = None, **kwargs: Any)[source]¶ Bases: BaseTracer Callback Handler that logs to Weights and Biases. This handler logs the model architecture and run traces to Weights and Biases, ensuring that all LangChain activity is logged to W&B. Initializes the WandbTracer. Parameters run_args – (dict, optional) Arguments to pass to wandb.init(). If not provided, wandb.init() will be called with no arguments. Please refer to the wandb.init documentation for more details. To use W&B to monitor all LangChain activity, add this tracer like any other LangChain callback:

```
from wandb.integration.langchain import WandbTracer

tracer = WandbTracer()
chain = LLMChain(llm, callbacks=[tracer])

# …end of notebook / script:
tracer.finish()
```

Methods __init__([run_args]) Initializes the WandbTracer. finish() Waits for all asynchronous processes to finish and data to upload. on_agent_action(action, *, run_id[, ...]) Run on agent action. on_agent_finish(finish, *, run_id[, ...]) Run on agent end. on_chain_end(outputs, *, run_id, **kwargs) End a trace for a chain run. on_chain_error(error, *, run_id, **kwargs) Handle an error for a chain run. on_chain_start(serialized, inputs, *, run_id) Start a trace for a chain run. on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running. on_llm_end(response, *, run_id, **kwargs) End a trace for an LLM run. on_llm_error(error, *, run_id, **kwargs) Handle an error for an LLM run. on_llm_new_token(token, *, run_id[, ...]) Run on new LLM token. on_llm_start(serialized, prompts, *, run_id) Start a trace for an LLM run. on_retriever_end(documents, *, run_id, **kwargs) Run when Retriever ends running. on_retriever_error(error, *, run_id, **kwargs) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, *, run_id[, parent_run_id]) Run on arbitrary text. on_tool_end(output, *, run_id, **kwargs) End a trace for a tool run. on_tool_error(error, *, run_id, **kwargs) Handle an error for a tool run. on_tool_start(serialized, input_str, *, run_id) Start a trace for a tool run. Attributes ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline finish() → None[source]¶ Waits for all asynchronous processes to finish and data to upload.
Proxy for wandb.finish(). on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent action. on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on agent end. on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a chain run. on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a chain run. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a chain run. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for an LLM run. on_llm_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for an LLM run.
on_llm_new_token(token: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶ Run on new LLM token. Only available when streaming is enabled. on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for an LLM run. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Run when Retriever starts running. on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run on arbitrary text. on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → None¶ End a trace for a tool run. on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, **kwargs: Any) → None¶ Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶ Start a trace for a tool run. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶ run_map: Dict[str, Run]¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.tracers.wandb.WandbTracer.html
langchain.callbacks.utils.flatten_dict¶ langchain.callbacks.utils.flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '_') → Dict[str, Any][source]¶ Flattens a nested dictionary into a flat dictionary. Parameters nested_dict (dict) – The nested dictionary to flatten. parent_key (str) – The prefix to prepend to the keys of the flattened dict. sep (str) – The separator to use between the parent key and the key of the flattened dictionary. Returns A flat dictionary. Return type (dict)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.utils.flatten_dict.html
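A minimal sketch of the flattening behavior described above, assuming string keys and plain nested dicts (the actual implementation may differ in edge cases):

```python
from typing import Any, Dict


def flatten_dict(
    nested_dict: Dict[str, Any], parent_key: str = "", sep: str = "_"
) -> Dict[str, Any]:
    """Flatten a nested dictionary, joining key paths with `sep`."""
    flat: Dict[str, Any] = {}
    for key, value in nested_dict.items():
        # Prefix the current key with the accumulated parent path.
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested dicts, carrying the joined key as prefix.
            flat.update(flatten_dict(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat


print(flatten_dict({"a": {"b": 1, "c": {"d": 2}}, "e": 3}))
# {'a_b': 1, 'a_c_d': 2, 'e': 3}
```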
edd5ae962a1c-0
langchain.callbacks.utils.hash_string¶ langchain.callbacks.utils.hash_string(s: str) → str[source]¶ Hash a string using sha1. Parameters s (str) – The string to hash. Returns The hashed string. Return type (str)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.utils.hash_string.html
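The behavior described above amounts to a one-line sha1 digest; a sketch using the standard library:

```python
import hashlib


def hash_string(s: str) -> str:
    """Return the hex-encoded sha1 digest of a string."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()


print(hash_string("langchain"))  # a 40-character hex digest
```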
5f949b528792-0
langchain.callbacks.utils.import_pandas¶ langchain.callbacks.utils.import_pandas() → Any[source]¶ Import the pandas python package and raise an error if it is not installed.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.utils.import_pandas.html
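import_pandas (and the sibling import_spacy / import_textstat helpers below) follow a common guarded-import pattern. A sketch of that pattern using importlib — the function name and error message here are illustrative, not the actual implementation:

```python
import importlib
from typing import Any


def import_optional(name: str, pip_name: str = "") -> Any:
    """Import a package by name, raising a helpful error if it is missing."""
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        # Re-raise with install instructions so the caller knows what to do.
        raise ImportError(
            f"The {name} python package is required. "
            f"Install it with `pip install {pip_name or name}`."
        ) from exc


json_module = import_optional("json")  # stdlib module, always present
```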
a426f57ffe0a-0
langchain.callbacks.utils.import_spacy¶ langchain.callbacks.utils.import_spacy() → Any[source]¶ Import the spacy python package and raise an error if it is not installed.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.utils.import_spacy.html
62b0f80f8128-0
langchain.callbacks.utils.import_textstat¶ langchain.callbacks.utils.import_textstat() → Any[source]¶ Import the textstat python package and raise an error if it is not installed.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.utils.import_textstat.html
56be984ddd6b-0
langchain.callbacks.utils.load_json¶ langchain.callbacks.utils.load_json(json_path: Union[str, Path]) → str[source]¶ Load json file to a string. Parameters json_path (str) – The path to the json file. Returns The string representation of the json file. Return type (str)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.utils.load_json.html
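A sketch of what load_json does (and its dict-returning sibling load_json_to_dict, documented further below), assuming a UTF-8 encoded file:

```python
import json
from pathlib import Path
from typing import Union


def load_json(json_path: Union[str, Path]) -> str:
    """Read a json file and return its raw text."""
    return Path(json_path).read_text(encoding="utf-8")


def load_json_to_dict(json_path: Union[str, Path]) -> dict:
    """Read a json file and parse it into a dictionary."""
    return json.loads(load_json(json_path))
```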
5e4e71660d1e-0
langchain.callbacks.wandb_callback.analyze_text¶ langchain.callbacks.wandb_callback.analyze_text(text: str, complexity_metrics: bool = True, visualize: bool = True, nlp: Any = None, output_dir: Optional[Union[str, Path]] = None) → dict[source]¶ Analyze text using textstat and spacy. Parameters text (str) – The text to analyze. complexity_metrics (bool) – Whether to compute complexity metrics. visualize (bool) – Whether to visualize the text. nlp (spacy.lang) – The spacy language model to use for visualization. output_dir (str) – The directory to save the visualization files to. Returns A dictionary containing the complexity metrics and visualization files serialized in a wandb.Html element. Return type (dict)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.analyze_text.html
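The real analyze_text delegates to the external textstat and spacy packages. As an illustration of the kind of complexity metrics it returns, here is a stdlib-only sketch computing two simple stand-ins (these are not the actual textstat metrics):

```python
import re
from typing import Dict


def simple_complexity_metrics(text: str) -> Dict[str, float]:
    """Rough stand-ins for readability metrics: average words per
    sentence and average characters per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return {"avg_words_per_sentence": 0.0, "avg_chars_per_word": 0.0}
    return {
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "avg_chars_per_word": sum(len(w) for w in words) / len(words),
    }


print(simple_complexity_metrics("Short text. A second, longer sentence here."))
```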
aa98d8b6ebf6-0
langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation¶ langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) → Any[source]¶ Construct an html element from a prompt and a generation. Parameters prompt (str) – The prompt. generation (str) – The generation. Returns The html element. Return type (wandb.Html)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.construct_html_from_prompt_and_generation.html
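The actual helper wraps its markup in a wandb.Html element; stripping that dependency, the core string construction might look like this sketch (the exact markup and styling here are assumptions):

```python
def construct_html(prompt: str, generation: str) -> str:
    """Build a small html snippet pairing a prompt with its generation."""
    # Preserve line breaks from the raw strings as html breaks.
    formatted_prompt = prompt.replace("\n", "<br>")
    formatted_generation = generation.replace("\n", "<br>")
    return (
        f'<p style="color:black;">{formatted_prompt}:</p>'
        f'<p style="color:green;">{formatted_generation}</p>'
    )
```

In the real function, the returned string would then be passed to wandb.Html(...) for logging.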
97121e1950f4-0
langchain.callbacks.wandb_callback.import_wandb¶ langchain.callbacks.wandb_callback.import_wandb() → Any[source]¶ Import the wandb python package and raise an error if it is not installed.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.import_wandb.html
cf57c496b516-0
langchain.callbacks.wandb_callback.load_json_to_dict¶ langchain.callbacks.wandb_callback.load_json_to_dict(json_path: Union[str, Path]) → dict[source]¶ Load json file to a dictionary. Parameters json_path (str) – The path to the json file. Returns The dictionary representation of the json file. Return type (dict)
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.load_json_to_dict.html
a9c3a8e68a79-0
langchain.callbacks.wandb_callback.WandbCallbackHandler¶ class langchain.callbacks.wandb_callback.WandbCallbackHandler(job_type: Optional[str] = None, project: Optional[str] = 'langchain_callback_demo', entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]¶ Bases: BaseMetadataCallbackHandler, BaseCallbackHandler Callback Handler that logs to Weights and Biases. Parameters job_type (str) – The type of job. project (str) – The project to log to. entity (str) – The entity to log to. tags (list) – The tags to log. group (str) – The group to log to. name (str) – The name of the run. notes (str) – The notes to log. visualize (bool) – Whether to visualize the run. complexity_metrics (bool) – Whether to log complexity metrics. stream_logs (bool) – Whether to stream callback actions to W&B. This handler utilizes the associated callback method, formats the input of each callback function with metadata regarding the state of the LLM run, and adds the response to the list of records for both the {method}_records and action. It then logs the response to Weights and Biases using the run.log() method. Initialize callback handler. Methods __init__([job_type, project, entity, tags, ...]) Initialize callback handler. flush_tracker([langchain_asset, reset, ...]) Flush the tracker and reset the session. get_custom_callback_meta()
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.WandbCallbackHandler.html
a9c3a8e68a79-1
Flush the tracker and reset the session. get_custom_callback_meta() on_agent_action(action, **kwargs) Run on agent action. on_agent_finish(finish, **kwargs) Run when agent ends running. on_chain_end(outputs, **kwargs) Run when chain ends running. on_chain_error(error, **kwargs) Run when chain errors. on_chain_start(serialized, inputs, **kwargs) Run when chain starts running. on_chat_model_start(serialized, messages, *, ...) Run when a chat model starts running. on_llm_end(response, **kwargs) Run when LLM ends running. on_llm_error(error, **kwargs) Run when LLM errors. on_llm_new_token(token, **kwargs) Run when LLM generates a new token. on_llm_start(serialized, prompts, **kwargs) Run when LLM starts. on_retriever_end(documents, *, run_id[, ...]) Run when Retriever ends running. on_retriever_error(error, *, run_id[, ...]) Run when Retriever errors. on_retriever_start(serialized, query, *, run_id) Run when Retriever starts running. on_text(text, **kwargs) Run when agent is ending. on_tool_end(output, **kwargs) Run when tool ends running. on_tool_error(error, **kwargs) Run when tool errors. on_tool_start(serialized, input_str, **kwargs) Run when tool starts running. reset_callback_meta() Reset the callback metadata. Attributes always_verbose Whether to call verbose callbacks even if verbose is False. ignore_agent Whether to ignore agent callbacks. ignore_chain
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.WandbCallbackHandler.html
a9c3a8e68a79-2
ignore_agent Whether to ignore agent callbacks. ignore_chain Whether to ignore chain callbacks. ignore_chat_model Whether to ignore chat model callbacks. ignore_llm Whether to ignore LLM callbacks. ignore_retriever Whether to ignore retriever callbacks. raise_error run_inline flush_tracker(langchain_asset: Any = None, reset: bool = True, finish: bool = False, job_type: Optional[str] = None, project: Optional[str] = None, entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: Optional[bool] = None, complexity_metrics: Optional[bool] = None) → None[source]¶ Flush the tracker and reset the session. Parameters langchain_asset – The langchain asset to save. reset – Whether to reset the session. finish – Whether to finish the run. job_type – The job type. project – The project. entity – The entity. tags – The tags. group – The group. name – The name. notes – The notes. visualize – Whether to visualize. complexity_metrics – Whether to compute complexity metrics. Returns – None get_custom_callback_meta() → Dict[str, Any]¶ on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶ Run on agent action. on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶ Run when agent ends running. on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain ends running. on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when chain errors.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.WandbCallbackHandler.html
a9c3a8e68a79-3
Run when chain errors. on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶ Run when chain starts running. on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶ Run when a chat model starts running. on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶ Run when LLM ends running. on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when LLM errors. on_llm_new_token(token: str, **kwargs: Any) → None[source]¶ Run when LLM generates a new token. on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶ Run when LLM starts. on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever ends running. on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶ Run when Retriever errors. on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.WandbCallbackHandler.html
a9c3a8e68a79-4
Run when Retriever starts running. on_text(text: str, **kwargs: Any) → None[source]¶ Run when agent is ending. on_tool_end(output: str, **kwargs: Any) → None[source]¶ Run when tool ends running. on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶ Run when tool errors. on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶ Run when tool starts running. reset_callback_meta() → None¶ Reset the callback metadata. property always_verbose: bool¶ Whether to call verbose callbacks even if verbose is False. property ignore_agent: bool¶ Whether to ignore agent callbacks. property ignore_chain: bool¶ Whether to ignore chain callbacks. property ignore_chat_model: bool¶ Whether to ignore chat model callbacks. property ignore_llm: bool¶ Whether to ignore LLM callbacks. property ignore_retriever: bool¶ Whether to ignore retriever callbacks. raise_error: bool = False¶ run_inline: bool = False¶
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.wandb_callback.WandbCallbackHandler.html
b914ce646dc6-0
langchain.callbacks.whylabs_callback.import_langkit¶ langchain.callbacks.whylabs_callback.import_langkit(sentiment: bool = False, toxicity: bool = False, themes: bool = False) → Any[source]¶ Import the langkit python package and raise an error if it is not installed. Parameters sentiment – Whether to import the langkit.sentiment module. Defaults to False. toxicity – Whether to import the langkit.toxicity module. Defaults to False. themes – Whether to import the langkit.themes module. Defaults to False. Returns The imported langkit module.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.whylabs_callback.import_langkit.html
bbb087daf476-0
langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler¶ class langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler(logger: Logger)[source]¶ Bases: BaseCallbackHandler Callback Handler for logging to WhyLabs. This callback handler utilizes langkit to extract features from the prompts & responses when interacting with an LLM. These features can be used to guardrail, evaluate, and observe interactions over time to detect issues relating to hallucinations, prompt engineering, or output validation. LangKit is an LLM monitoring toolkit developed by WhyLabs. Here are some examples of what can be monitored with LangKit:
Text Quality - readability score - complexity and grade scores
Text Relevance - similarity scores between prompt/responses - similarity scores against user-defined themes - topic classification
Security and Privacy - patterns: count of strings matching a user-defined regex pattern group - jailbreaks: similarity scores with respect to known jailbreak attempts - prompt injection: similarity scores with respect to known prompt attacks - refusals: similarity scores with respect to known LLM refusal responses
Sentiment and Toxicity - sentiment analysis - toxicity analysis
For more information, see https://docs.whylabs.ai/docs/language-model-monitoring or check out the LangKit repo here: https://github.com/whylabs/langkit Parameters api_key (Optional[str]) – WhyLabs API key. Optional because the preferred way to specify the API key is with environment variable WHYLABS_API_KEY. org_id (Optional[str]) – WhyLabs organization id to write profiles to. Optional because the preferred way to specify the organization id is with environment variable WHYLABS_DEFAULT_ORG_ID. dataset_id (Optional[str]) – WhyLabs dataset id to write profiles to.
rtdocs\api.python.langchain.com\en\latest\callbacks\langchain.callbacks.whylabs_callback.WhyLabsCallbackHandler.html