How to stream responses#
This notebook goes over how to use streaming with a chat model.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
How to use few shot examples#
This notebook covers how to use few-shot examples in chat models.
There does not yet appear to be solid consensus on how best to do few-shot prompting. As a result, we are not solidifying any abstractions around this yet, but rather using existing abstractions.
Alternating Human/AI messages#
The first way of doing few-shot prompting relies on alternating human/AI messages. See an example of this below.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty!"
System Messages#
OpenAI provides an optional name parameter, which they also recommend using in conjunction with system messages to do few-shot prompting. Here is an example of how to do that below.
template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"})
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")
"I be lovin' programmin', me hearty."
Generic Functionality#
A collection of examples covering generic LLM functionality:
How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How to cache LLM calls
How to serialize LLM classes
How to stream responses from LLMs and Chat Models
How to track token usage
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is designed for interfacing with LLMs (large language models). There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc.), and this class is designed to provide a standard interface for all of them. In this part of the documentation we focus on generic LLM functionality; for details on working with a specific LLM wrapper, see the examples in the How-To section (how_to_guides.rst).
In this notebook we will work with an OpenAI LLM wrapper, although the functionality highlighted is generic to all LLM types.
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)
Generate Text: the most basic functionality of an LLM is simply calling it, passing in a string and getting back a string.
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Generate: more broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM-provider-specific information.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
len(llm_result.generations)
30
llm_result.generations[0]
[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]
llm_result.generations[-1]
[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]
You can also access provider-specific information that is returned. This information is not standardized across providers.
llm_result.llm_output
{'token_usage': {'completion_tokens': 3903,
'total_tokens': 4023,
'prompt_tokens': 120}}
Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Notice that by default the tokens are estimated using tiktoken (except for legacy Python versions <3.8, where a Hugging Face tokenizer is used).
llm.get_num_tokens("what a joke")
3
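For reference, here is a minimal sketch of how such an estimate can be reproduced with tiktoken directly. The encoding name is an assumption; pick the encoding that matches your model:
import tiktoken

# Assumed encoding: older OpenAI completion models such as text-ada-001
# use the GPT-2 byte-pair encoding; swap in the encoding for your model.
encoding = tiktoken.get_encoding("gpt2")
print(len(encoding.encode("what a joke")))  # -> 3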
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.
AI21
Aleph Alpha
Anthropic
Azure OpenAI LLM Example
Banana
CerebriumAI LLM Example
Cohere
DeepInfra LLM Example
ForefrontAI LLM Example
GooseAI LLM Example
GPT4All
Hugging Face Hub
Llama-cpp
Manifest
Modal
OpenAI
Petals LLM Example
PromptLayer OpenAI
Replicate
SageMakerEndpoint
Self-Hosted Models via Runhouse
StochasticAI
Writer
Hugging Face Hub#
This example showcases how to connect to the Hugging Face Hub.
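Note that calling the Hub requires a Hugging Face API token, which LangChain reads from the HUGGINGFACEHUB_API_TOKEN environment variable. A minimal setup sketch (the token value is a placeholder):
import os

# Create an access token at https://huggingface.co/settings/tokens
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "YOUR_TOKEN_HERE"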
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":0, "max_length":64}))
question = "Who won the FIFA World Cup in the year 1994? "
print(llm_chain.run(question))
The FIFA World Cup is a football tournament that is played every 4 years. The year 1994 was the 44th FIFA World Cup. The final answer: Brazil.
Writer#
This example goes over how to use LangChain to interact with Writer models
from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models like in this example, see https://github.com/HazyResearch/manifest
from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper
manifest = Manifest(
    client_name="huggingface",
    client_connection="http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())
{'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B'}
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})
# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
_prompt = """Write a concise summary of the following:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])
text_splitter = CharacterTextSplitter()
mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
mp_chain.run(state_of_the_union)
'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'
Compare HF Models#
from langchain.model_laboratory import ModelLaboratory
manifest1 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5000"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5001"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5002"
    ),
    llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink
ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round
ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
ForefrontAI LLM Example#
This notebook goes over how to use LangChain with ForefrontAI.
Imports#
import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from ForefrontAI. You are given a 5-day free trial to test different models.
os.environ["FOREFRONTAI_API_KEY"] = "YOUR_KEY_HERE"
Create the ForefrontAI instance#
You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.
llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")
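For illustration, here is a sketch of passing such parameters explicitly. Apart from endpoint_url, the parameter names are assumptions drawn from the prose above; check the wrapper's signature before relying on them:
# Hypothetical values; endpoint_url is required, the other names are assumed.
llm = ForefrontAI(
    endpoint_url="YOUR ENDPOINT URL HERE",
    length=256,       # assumed name for the maximum generation length
    temperature=0.7,  # sampling temperature
)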
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
GPT4All#
This example goes over how to use LangChain to interact with GPT4All models
%pip install pyllamacpp > /dev/null
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Specify Model#
To run locally, download a compatible ggml-formatted model. For more info, visit https://github.com/nomic-ai/pyllamacpp
Note that new models are uploaded regularly - check the link above for the most recent .bin URL
local_path = './models/gpt4all-lora-quantized-ggml.bin' # replace with your desired local file path
Uncomment the below block to download a model. You may want to update url to a new version.
# import requests
# from pathlib import Path
# from tqdm import tqdm
# Path(local_path).parent.mkdir(parents=True, exist_ok=True)
# # Example model. Check https://github.com/nomic-ai/pyllamacpp for the latest models.
# url = 'https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized-ggml.bin'
# # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)
# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
#     for chunk in tqdm(response.iter_content(chunk_size=8192)):
#         if chunk:
#             f.write(chunk)
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
GooseAI LLM Example#
This notebook goes over how to use LangChain with GooseAI.
Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.
$ pip3 install openai
Imports#
import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.
os.environ["GOOSEAI_API_KEY"] = "YOUR_KEY_HERE"
Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.
llm = GooseAI()
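For illustration, a sketch of passing such parameters explicitly; the parameter names are assumptions based on the prose above, so check the wrapper's signature before relying on them:
# Hypothetical values; model_name, max_tokens and temperature are assumed names.
llm = GooseAI(
    model_name="gpt-neo-20b",
    max_tokens=64,
    temperature=0.7,
)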
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
StochasticAI#
This example goes over how to use LangChain to interact with StochasticAI models
from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url="YOUR_API_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Cohere#
This example goes over how to use LangChain to interact with Cohere models
from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Cohere()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/integrations/cohere.html
|
904e4c58ef46-1
|
you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
Aleph Alpha#
This example goes over how to use LangChain to interact with Aleph Alpha models
from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain
template = """Q: {question}
A:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)
' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
Self-Hosted Models via Runhouse#
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.
For more information, see Runhouse or the Runhouse docs.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')
# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
# ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
# name='rh-a10x')
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds
"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
You can also load more custom models through the SelfHostedHuggingFaceLLM interface:
llm = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-small",
task="text2text-generation",
hardware=gpu,
)
llm("What is the capital of Germany?")
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds
'berlin'
Using a custom load function, we can load a custom pipeline directly on the remote hardware:
def load_pipeline():
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline  # Need to be inside the fn in notebooks
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe

def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]
llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)
llm("Who is the current US president?")
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds
'john w. bush'
You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:
pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
    # model_reqs reuses the requirements list from earlier in this notebook
    pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)
Instead, we can also send it to the hardware’s filesystem, which will be much faster.
import pickle

rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
AI21#
This example goes over how to use LangChain to interact with AI21 models
from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AI21()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
SageMakerEndpoint#
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.
!pip3 install langchain boto3
from langchain.docstore.document import Document
example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""
docs = [
Document(
page_content=example_doc_1,
)
]
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from langchain.chains.question_answering import load_qa_chain
import json
query = """How long was Elizabeth hospitalized?
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # The original snippet used {prompt: prompt, ...}, which makes the prompt
        # text itself the JSON key. "inputs" is the usual key for Hugging Face
        # text-generation endpoints; adjust it to match your deployed model.
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpoint-name",
credentials_profile_name="credentials-profile-name",
region_name="us-west-2",
model_kwargs={"temperature":1e-10},
content_handler=content_handler
),
prompt=PROMPT
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
Llama-cpp#
This notebook goes over how to run llama-cpp within LangChain
!pip install llama-cpp-python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = LlamaCpp(model_path="./ggml-model-q4_0.bin")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
'\n\nWe know that Justin Bieber is currently 25 years old and that he was born on March 1st, 1994 and that he is a singer and he has an album called Purpose, so we know that he was born when Super Bowl XXXVIII was played between Dallas and Seattle and that it took place February 1st, 2004 and that the Seattle Seahawks won 24-21, so Seattle is our answer!'
Modal#
This example goes over how to use LangChain to interact with Modal models
from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
PromptLayer OpenAI#
This example showcases how to connect to PromptLayer to start recording your OpenAI requests.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports#
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.
Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "********"
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
' to go outside\n\nUnfortunately, cats cannot go outside without being supervised by a human. Going outside can be dangerous for cats, as they may come into contact with cars, other animals, or other dangers. If you want to go outside, ask your human to take you on a supervised walk or to a safe, enclosed outdoor space.'
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.
Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
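As a rough sketch of attaching a template to a request, assuming the template has already been registered in the PromptLayer prompt registry (the prompt name and input variables below are hypothetical):
# pl_request_id comes from the generate call above; "my_template" is hypothetical.
promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="my_template",
    prompt_input_variables={"text": "Tell me a joke"},
)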
DeepInfra LLM Example#
This notebook goes over how to use LangChain with DeepInfra.
Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from DeepInfra. You are given 1 hour of free serverless GPU compute to test different models.
You can print your token with deepctl auth token
os.environ["DEEPINFRA_API_TOKEN"] = "YOUR_KEY_HERE"
Create the DeepInfra instance#
Make sure to deploy your model first via deepctl deploy create -m google/flan-t5-xl (for example)
llm = DeepInfra(model_id="DEPLOYED MODEL ID")
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in 2015?"
llm_chain.run(question)
Banana#
This example goes over how to use LangChain to interact with Banana models
import os
from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain
os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Anthropic#
This example goes over how to use LangChain to interact with Anthropic models
from langchain.llms import Anthropic
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Anthropic()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
" Step 1: Justin Beiber was born on March 1, 1994\nStep 2: The NFL season ends with the Super Bowl in January/February\nStep 3: Therefore, the Super Bowl that occurred closest to Justin Beiber's birth would be Super Bowl XXIX in 1995\nStep 4: The San Francisco 49ers won Super Bowl XXIX in 1995\n\nTherefore, the answer is the San Francisco 49ers won the Super Bowl in the year Justin Beiber was born."
OpenAI#
This example goes over how to use LangChain to interact with OpenAI models
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in that year was the Dallas Cowboys.'
CerebriumAI LLM Example#
This notebook goes over how to use LangChain with CerebriumAI.
Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
$ pip3 install cerebrium
Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from CerebriumAI. You are given 1 hour of free serverless GPU compute to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"
Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")
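For illustration, a sketch of passing such parameters explicitly; apart from endpoint_url, the parameter names are assumptions based on the prose above, so check the wrapper's signature before relying on them:
# Hypothetical values; max_length and temperature are assumed names.
llm = CerebriumAI(
    endpoint_url="YOUR ENDPOINT URL HERE",
    max_length=100,
    temperature=0.7,
)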
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Petals LLM Example#
This notebook goes over how to use LangChain with Petals.
Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
$ pip3 install petals
Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain
Set the Environment API Key#
Make sure to get your API key from Hugging Face.
os.environ["HUGGINGFACE_API_KEY"] = "YOUR_KEY_HERE"
Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
llm = Petals(model_name="bigscience/bloom-petals")
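For illustration, a sketch of passing such parameters explicitly; apart from model_name, the parameter names are assumptions based on the prose above, so check the wrapper's signature before relying on them:
# Hypothetical values; max_new_tokens and temperature are assumed names.
llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=256,
    temperature=0.7,
)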
Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)
Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
Azure OpenAI LLM Example#
This notebook goes over how to use LangChain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI’s API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.
API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
Alternatively, you can configure the API right within your running Python environment:
import os
os.environ["OPENAI_API_TYPE"] = "azure"
...
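For completeness, a sketch of the full configuration in Python, mirroring the bash variables above (the resource name and key are placeholders):
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
os.environ["OPENAI_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "<your Azure OpenAI API key>"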
Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let’s say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:
import openai

response = openai.Completion.create(
    engine="text-davinci-002-prod",
    prompt="This is a test",
    max_tokens=5
)
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(deployment_name="text-davinci-002-prod", model_name="text-davinci-002")
# Run the LLM
llm("Tell me a joke")
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
We can also print the LLM and see its custom print.
print(llm)
AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Replicate#
This example goes over how to use LangChain to interact with Replicate models
import os
from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain
os.environ["REPLICATE_API_TOKEN"] = "YOUR REPLICATE API TOKEN"
Setup#
To run this notebook, you’ll need to create a replicate account and install the replicate python client.
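The client installs from PyPI, for example:
$ pip install replicate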
Calling a model#
Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version
For example, for this flan-t5 model, click on the API tab. The model name/version would be: daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8
Only the model param is required, but we can add other model params when initializing.
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'
We can call any replicate model using this syntax. For example, we can call stable diffusion.
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
image_output
'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png'
The model spits out a URL. Let’s render it.
from PIL import Image
import requests
from io import BytesIO
response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img
Chaining Calls#
The whole point of LangChain is to… chain! Here’s an example of how to do that.
from langchain.chains import SimpleSequentialChain
First, let’s define the LLM for this model as a flan-5, and text2image as a stable diffusion model.
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")
First prompt in the chain
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
Second prompt to get the logo for company description
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Third prompt, let’s create the image based on the description output from prompt 2
third_prompt = PromptTemplate(
input_variables=["company_logo_description"],
template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)
Now let’s run it!
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
response = requests.get("https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png")
img = Image.open(BytesIO(response.content))
img
How to use the async API for LLMs#
LangChain provides async support for LLMs by leveraging the asyncio library.
Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently OpenAI, PromptLayerOpenAI, ChatOpenAI and Anthropic are supported, with async support for other LLMs on the roadmap.
You can use the agenerate method to call an OpenAI LLM asynchronously.
import time
import asyncio
from langchain.llms import OpenAI

def generate_serially():
    llm = OpenAI(temperature=0.9)
    for _ in range(10):
        resp = llm.generate(["Hello, how are you?"])
        print(resp.generations[0][0].text)

async def async_generate(llm):
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)

async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    tasks = [async_generate(llm) for _ in range(10)]
    await asyncio.gather(*tasks)

s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')

s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, how about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you! How about you?
I'm doing well, thank you. How about you?
Concurrent executed in 1.39 seconds.
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
I'm doing well, thanks! How about you?
I'm doing well, thank you. How about you?
I'm doing well, thank you. How about yourself?
I'm doing well, thanks for asking. How about you?
Serial executed in 5.77 seconds.
This code shows how to call OpenAI's LLM through the async API.
First, a text-generating function generate_serially is defined and called in a loop, producing text one call at a time.
Next, an async function async_generate is defined; the await keyword indicates that the call executes asynchronously. It calls OpenAI's agenerate method to generate text, and the generated text is printed with print().
Finally, a concurrent generator, generate_concurrently, is defined; used together with async_generate, it calls the LLM concurrently to generate text.
When the script runs, the async function await generate_concurrently() is called first and its execution time is measured, for comparison against the serial version. Then the serial function generate_serially() is called and timed as well.
Since the async API can call multiple LLMs at the same time, the concurrent run should take less time than the serial run.
tasks = [async_generate(llm) for _ in range(10)]
await asyncio.gather(*tasks)
This is Python's list-comprehension syntax, used here to build a list of coroutine objects; _ is a throwaway variable, and range(10) loops 10 times.
In asyncio.gather(*tasks), the star operator * unpacks the tasks list and passes its elements as separate arguments to asyncio.gather(). As a result, each async_generate(llm) call executes concurrently, speeding up the whole process. The text produced by the async_generate(llm) calls ends up in the list returned by gather(), which contains the results of all the concurrent executions.
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a wrapper different from the ones LangChain supports. A custom LLM only needs to implement one required method:
a _call method that takes in a string input and an optional list of stop words, and returns a string result.
It can also implement one optional property:
an _identifying_params property that helps with printing the class. It should return a dictionary.
Let's implement a very simple custom LLM that just returns the first N characters of its input.
from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any

class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[:self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}
We can now use this as any other LLM.
llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '
We can also print the LLM and see its custom print.
print(llm)
CustomLLM
Params: {'n': 10}
How to stream responses from LLMs and Chat Models#
LangChain provides streaming support for LLMs. Currently we support streaming for the OpenAI, ChatOpenAI and Anthropic implementations, and streaming support for other LLM implementations is on the roadmap. To use streaming, use a CallbackHandler that implements on_llm_new_token. In this example we are using StreamingStdOutCallbackHandler.
from langchain.llms import OpenAI, Anthropic
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage
llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = llm("Write me a song about sparkling water.")
Verse 1
I'm sippin' on sparkling water,
It's so refreshing and light,
It's the perfect way to quench my thirst
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 2
I'm sippin' on sparkling water,
It's so bubbly and bright,
It's the perfect way to cool me down
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
Verse 3
I'm sippin' on sparkling water,
It's so light and so clear,
It's the perfect way to keep me cool
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
|
f0161e6baec1-1
|
On a hot summer night.
Chorus
Sparkling water, sparkling water,
It's the best way to stay hydrated,
It's so crisp and so clean,
It's the perfect way to stay refreshed.
When using generate, we still have access to the final LLMResult. However, token_usage is not currently supported for streaming.
llm.generate(["Tell me a joke."])
Q: What did the fish say when it hit the wall?
A: Dam!
LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': None, 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
Here's an example with the ChatOpenAI chat model:
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's oh so pure
Sparkling water, I can't ignore
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Verse 2:
No sugar, no calories, just H2O
A drink that's good for me, don't you know
With lemon or lime, you're even better
Sparkling water, you're my forever
Chorus:
Sparkling water, oh how you shine
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
|
f0161e6baec1-2
|
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Bridge:
You're my go-to drink, day or night
You make me feel so light
I'll never give you up, you're my true love
Sparkling water, you're sent from above
Chorus:
Sparkling water, oh how you shine
A taste so clean, it's simply divine
You quench my thirst, you make me feel alive
Sparkling water, you're my favorite vibe
Outro:
Sparkling water, you're the one for me
I'll never let you go, can't you see
You're my drink of choice, forevermore
Sparkling water, I adore.
Here's an example with the Anthropic LLM, which uses their claude model:
llm = Anthropic(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
llm("Write me a song about sparkling water.")
Sparkling water, bubbles so bright,
Fizzing and popping in the light.
No sugar or calories, a healthy delight,
Sparkling water, refreshing and light.
Carbonation that tickles the tongue,
In flavors of lemon and lime unsung.
Sparkling water, a drink quite all right,
Bubbles sparkling in the light.
'\nSparkling water, bubbles so bright,\n\nFizzing and popping in the light.\n\nNo sugar or calories, a healthy delight,\n\nSparkling water, refreshing and light.\n\nCarbonation that tickles the tongue,\n\nIn flavors of lemon and lime unsung.\n\nSparkling water, a drink quite all right,\n\nBubbles sparkling in the light.'
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
|
f0161e6baec1-3
|
previous
How to serialize LLM classes
next
How to track token usage
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
|
0d7165e2153f-0
|
.ipynb
.pdf
How to serialize LLM classes
Contents
Loading
Saving
How to serialize LLM classes#
This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc).
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm
Loading#
First, let's go over loading an LLM from disk. LLMs can be saved on disk in two formats: json or yaml. No matter the extension, they are loaded in the same way.
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
llm = load_llm("llm.json")
!cat llm.yaml
_type: openai
best_of: 1
frequency_penalty: 0.0
max_tokens: 256
model_name: text-davinci-003
n: 1
presence_penalty: 0.0
request_timeout: null
temperature: 0.7
top_p: 1.0
llm = load_llm("llm.yaml")
Saving#
If you want to go from an LLM in memory to a serialized version of it, you can do so easily by calling the .save method. Again, this supports both json and yaml.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_serialization.html
|
0d7165e2153f-1
|
llm.save("llm.json")
llm.save("llm.yaml")
previous
How to cache LLM calls
next
How to stream LLM and Chat Model responses
Contents
Loading
Saving
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_serialization.html
|
4bbb011e6dbc-0
|
.ipynb
.pdf
How (and why) to use the fake LLM
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this.
We start this with using the FakeLLM in an agent.
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
tools = load_tools(["python_repl"])
responses = [
    "Action: Python REPL\nAction Input: print(2 + 2)",
    "Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")
> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
previous
How to write a custom LLM wrapper
next
How to cache LLM calls
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/fake_llm.html
|
0c8b8103bedc-0
|
.ipynb
.pdf
How to cache LLM calls
Contents
In Memory Cache
SQLite Cache
Redis Cache
GPTCache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.
from langchain.llms import OpenAI
In Memory Cache#
import langchain
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 14.2 ms, sys: 4.9 ms, total: 19.1 ms
Wall time: 1.1 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 162 µs, sys: 7 µs, total: 169 µs
Wall time: 175 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-1
|
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Redis Cache#
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
GPTCache#
We can use GPTCache for exact match caching OR to cache results based on semantic similarity
Let’s first start with an example of exact match
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import get_data_manager
from langchain.cache import GPTCache
# Avoid having multiple caches use the same file, which would cause different llm model caches to affect each other
i = 0
file_prefix = "data_map"
def init_gptcache_map(cache_obj: gptcache.Cache):
    global i
    cache_path = f'{file_prefix}_{i}.txt'
    cache_obj.init(
        pre_embedding_func=get_prompt,
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-2
|
        data_manager=get_data_manager(data_path=cache_path),
    )
    i += 1
langchain.llm_cache = GPTCache(init_gptcache_map)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 8.6 ms, sys: 3.82 ms, total: 12.4 ms
Wall time: 881 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 286 µs, sys: 21 µs, total: 307 µs
Wall time: 316 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
Let’s now show an example of similarity caching
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import get_data_manager
from langchain.cache import GPTCache
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache import Cache
from gptcache.embedding import Onnx
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation
# Avoid having multiple caches use the same file, which would cause different llm model caches to affect each other
i = 0
file_prefix = "data_map"
llm_cache = Cache()
def init_gptcache_map(cache_obj: gptcache.Cache):
    global i
    cache_path = f'{file_prefix}_{i}.txt'
    onnx = Onnx()
    cache_base = CacheBase('sqlite')
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-3
|
    vector_base = VectorBase('faiss', dimension=onnx.dimension)
    data_manager = get_data_manager(cache_base, vector_base, max_size=10, clean_size=2)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        embedding_func=onnx.to_embeddings,
        data_manager=data_manager,
        similarity_evaluation=SearchDistanceEvaluation(),
    )
    i += 1
langchain.llm_cache = GPTCache(init_gptcache_map)
%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 1.01 s, sys: 153 ms, total: 1.16 s
Wall time: 2.49 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 745 ms, sys: 13.2 ms, total: 758 ms
Wall time: 136 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is not an exact match, but semantically within distance so it hits!
llm("Tell me joke")
CPU times: user 737 ms, sys: 7.79 ms, total: 745 ms
Wall time: 135 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
SQLAlchemy Cache#
# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-4
|
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)
Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:
from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache
Base = declarative_base()
class FulltextLLMCache(Base):  # type: ignore
    """Postgres table for fulltext-indexed LLM Cache"""

    __tablename__ = "llm_cache_fulltext"
    id = Column(Integer, Sequence('cache_id'), primary_key=True)
    prompt = Column(String, nullable=False)
    llm = Column(String, nullable=False)
    idx = Column(Integer)
    response = Column(String)
    prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
    __table_args__ = (
        Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
    )
engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)
Optional Caching#
You can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-5
|
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)
%%time
llm("Tell me a joke")
CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
llm("Tell me a joke")
CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms
'\n\nTwo guys stole a calendar. They got six months each.'
Optional Caching in Chains#
You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards.
As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but not for the combine step.
llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
text_splitter = CharacterTextSplitter()
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)
from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-6
|
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
%%time
chain.run(docs)
CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'
When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.
%%time
chain.run(docs)
CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'
previous
How (and why) to use the fake LLM
next
How to serialize LLM classes
Contents
In Memory Cache
SQLite Cache
Redis Cache
GPTCache
SQLAlchemy Cache
Custom SQLAlchemy Schemas
Optional Caching
Optional Caching in Chains
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
0c8b8103bedc-7
|
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
|
46253102c963-0
|
.ipynb
.pdf
How to track token usage
How to track token usage#
This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.
Let’s first look at an extremely simple example of tracking token usage for a single LLM call.
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb)
Tokens Used: 42
Prompt Tokens: 4
Completion Tokens: 38
Successful Requests: 1
Total Cost (USD): $0.00084
Anything inside the context manager will get tracked. Here’s an example of using it to track multiple calls in sequence.
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    result2 = llm("Tell me a joke")
    print(cb.total_tokens)
91
If a chain or agent with multiple steps in it is used, it will track all those steps.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
with get_openai_callback() as cb:
    response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
    print(f"Total Tokens: {cb.total_tokens}")
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/token_usage_tracking.html
|
46253102c963-1
|
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012
previous
如何实现 LLM 和 Chat Model 的流式响应
next
Integrations
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/token_usage_tracking.html
|
83eb9b665e77-0
|
.rst
.pdf
Example Selectors
Example Selectors#
Note
Conceptual Guide
If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so.
The base interface is defined as below:
from abc import ABC, abstractmethod
from typing import Dict, List

class BaseExampleSelector(ABC):
    """Interface for selecting examples to include in prompts."""

    @abstractmethod
    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""
The only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let’s take a look at some below.
See below for a list of example selectors.
How to create a custom example selector
LengthBased ExampleSelector
Maximal Marginal Relevance ExampleSelector
NGram Overlap ExampleSelector
Similarity ExampleSelector
previous
Chat Prompt Template
next
How to create a custom example selector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors.html
|
ff11f5a98dbc-0
|
.rst
.pdf
Output Parsers
Output Parsers#
Note
Conceptual Guide
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
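As a minimal sketch of this interface (the CommaListParser class below is illustrative, not a LangChain class), a parser implementing the two required methods might look like:
from typing import List
from langchain.schema import BaseOutputParser

class CommaListParser(BaseOutputParser):
    """Toy parser: split a comma-separated model response into a list."""

    def get_format_instructions(self) -> str:
        return "Your response should be a list of comma separated values."

    def parse(self, text: str) -> List[str]:
        return [item.strip() for item in text.split(",")]

CommaListParser().parse("red, green, blue")
# -> ['red', 'green', 'blue']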
To start, we recommend familiarizing yourself with the Getting Started section
Output Parsers
After that, we provide deep dives on all the different types of output parsers.
CommaSeparatedListOutputParser
OutputFixingParser
PydanticOutputParser
RetryOutputParser
Structured Output Parser
previous
Similarity ExampleSelector
next
Output Parsers
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers.html
|
46a5337f2274-0
|
.rst
.pdf
Prompt Templates
Prompt Templates#
Note
Conceptual Guide
Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality LangChain provides for working with and constructing prompts.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.
Reference: API reference documentation for all prompt classes.
previous
Prompts
next
Getting Started
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates.html
|
8ff445c9d6a7-0
|
.ipynb
.pdf
Chat Prompt Template
Chat Prompt Template#
Chat models take a list of chat messages as input - this list is commonly referred to as a prompt.
Typically this is not simply a hardcoded list of messages but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
from langchain.prompts import (
    ChatPromptTemplate,
    PromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
HumanMessage(content='I love programming.', additional_kwargs={})]
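Since format_prompt returns a PromptValue, the same prompt can also be rendered as a single string for a plain LLM; a quick illustration (the exact string formatting may vary by version):
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_string()
# -> 'System: You are a helpful assistant that translates English to French.\nHuman: I love programming.'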
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/chat_prompt_template.html
|
8ff445c9d6a7-1
|
prompt = PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
    input_variables=["input_language", "output_language"],
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
previous
Example Selector
next
Example Selectors
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/chat_prompt_template.html
|
0e5f54ad4fa6-0
|
.ipynb
.pdf
Similarity ExampleSelector
Similarity ExampleSelector#
The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,
    # This is the number of examples to produce.
    k=1
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/similarity.html
|
0e5f54ad4fa6-1
|
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
# Input is a feeling, so should select the happy/sad example
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: worried
Output:
# Input is a measurement, so should select the tall/short example
print(similar_prompt.format(adjective="fat"))
Give the antonym of every input
Input: happy
Output: sad
Input: fat
Output:
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"})
print(similar_prompt.format(adjective="joyful"))
Give the antonym of every input
Input: happy
Output: sad
Input: joyful
Output:
previous
NGram Overlap ExampleSelector
next
Output Parsers
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/similarity.html
|
7baaaefc9c55-0
|
.ipynb
.pdf
Maximal Marginal Relevance ExampleSelector
Maximal Marginal Relevance ExampleSelector#
The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # This is the number of examples to produce.
    k=2
)
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/mmr.html
|
7baaaefc9c55-1
|
mmr_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
# Let's compare this to what we would just get if we went solely off of similarity
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
similar_prompt.example_selector.k = 2
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
previous
LengthBased ExampleSelector
next
NGram Overlap ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/mmr.html
|
99b38fa2faf2-0
|
.ipynb
.pdf
LengthBased ExampleSelector
LengthBased ExampleSelector#
This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
from langchain.prompts import PromptTemplate
from langchain.prompts import FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
    # These are the examples it has available to choose from.
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the maximum length that the formatted examples should be.
    # Length is measured by the get_text_length function below.
    max_length=25,
    # This is the function used to get the length of a string, which is used
    # to determine which examples to include. It is commented out because
    # it is provided as a default value if none is specified.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
|
99b38fa2faf2-1
|
    # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
|
99b38fa2faf2-2
|
Input: enthusiastic
Output:
previous
How to create a custom example selector
next
Maximal Marginal Relevance ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
|
6fa2d45fa87a-0
|
.ipynb
.pdf
NGram Overlap ExampleSelector
NGram Overlap ExampleSelector#
The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0 by default, so it will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
# These are a lot of examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
# These are examples of a fictional translation task.
examples = [
    {"input": "See Spot run.", "output": "Ver correr a Spot."},
    {"input": "My dog barks.", "output": "Mi perro ladra."},
    {"input": "Spot can run.", "output": "Spot puede correr."},
]
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
6fa2d45fa87a-1
|
ladra."},
{"input": "Spot can run.", "output": "Spot puede correr."},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = NGramOverlapExampleSelector(
    # These are the examples it has available to choose from.
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the threshold, at which selector stops.
    # It is set to -1.0 by default.
    threshold=-1.0,
    # For negative threshold:
    # Selector sorts examples by ngram overlap score, and excludes none.
    # For threshold greater than 1.0:
    # Selector excludes all examples, and returns an empty list.
    # For threshold equal to 0.0:
    # Selector sorts examples by ngram overlap score,
    # and excludes those with no ngram overlap with input.
)
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the Spanish translation of every input",
    suffix="Input: {sentence}\nOutput:",
    input_variables=["sentence"],
)
# An example input with large ngram overlap with "Spot can run."
# and no overlap with "My dog barks."
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: My dog barks.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
6fa2d45fa87a-2
|
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can set a threshold at which examples are excluded.
# For example, setting threshold equal to 0.0
# excludes examples with no ngram overlaps with input.
# Since "My dog barks." has no ngram overlaps with "Spot can run fast."
# it is excluded.
example_selector.threshold=0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can run fast.
Output:
# Setting small nonzero threshold
example_selector.threshold=0.09
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
6fa2d45fa87a-3
|
# Setting threshold greater than 1.0
example_selector.threshold=1.0+1e-9
print(dynamic_prompt.format(sentence="Spot can play fetch."))
Give the Spanish translation of every input
Input: Spot can play fetch.
Output:
previous
Maximal Marginal Relevance ExampleSelector
next
Similarity ExampleSelector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
|
9fe7a3ea6f9d-0
|
.md
.pdf
How to create a custom example selector
Contents
Implement custom example selector
Use custom example selector
How to create a custom example selector#
In this tutorial, we'll create a custom example selector that selects two examples at random from a given list of examples.
An ExampleSelector must implement two methods:
An add_example method which takes in an example and adds it into the ExampleSelector
A select_examples method which takes in input variables (which are meant to be user input) and returns a list of examples to use in the few shot prompt.
Let’s implement a custom ExampleSelector that just selects two examples at random.
Note
Take a look at the current set of example selector implementations supported in LangChain here.
Implement custom example selector#
from langchain.prompts.example_selector.base import BaseExampleSelector
from typing import Dict, List
import numpy as np
class CustomExampleSelector(BaseExampleSelector):

    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        """Add new example to store for a key."""
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""
        return np.random.choice(self.examples, size=2, replace=False)
Use custom example selector#
examples = [
    {"foo": "1"},
    {"foo": "2"},
    {"foo": "3"}
]
# Initialize example selector.
example_selector = CustomExampleSelector(examples)
# Select examples
example_selector.select_examples({"foo": "foo"})
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
|
9fe7a3ea6f9d-1
|
# -> array([{'foo': '2'}, {'foo': '3'}], dtype=object)
# Add new example to the set of examples
example_selector.add_example({"foo": "4"})
example_selector.examples
# -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}]
# Select examples
example_selector.select_examples({"foo": "foo"})
# -> array([{'foo': '1'}, {'foo': '4'}], dtype=object)
previous
Example Selectors
next
LengthBased ExampleSelector
Contents
Implement custom example selector
Use custom example selector
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
|
cdd9ced1eea9-0
|
.ipynb
.pdf
Output Parsers
Output Parsers#
Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
get_format_instructions() -> str: A method which returns a string containing instructions for how the output of a language model should be formatted.
parse(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
parse_with_prompt(str) -> Any: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
Below we go over the main type of output parser, the PydanticOutputParser. See the examples folder for other options.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)
# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/getting_started.html
|
cdd9ced1eea9-1
|
resolve the joke")
# You can add custom validation logic easily with Pydantic.
    @validator('setup')
    def question_ends_with_question_mark(cls, field):
        if field[-1] != '?':
            raise ValueError("Badly formed question!")
        return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')
previous
Output Parsers
next
CommaSeparatedListOutputParser
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/getting_started.html
|
1cd857bb4421-0
|
.ipynb
.pdf
RetryOutputParser
RetryOutputParser#
While in some cases it is possible to fix parsing mistakes by looking only at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example.
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, OutputFixingParser, RetryOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""
class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")
parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)
prompt_value = prompt.format_prompt(query="who is leo di caprios gf?")
bad_response = '{"action": "search"}'
If we try to parse this response as is, we will get an error
parser.parse(bad_response)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:24, in PydanticOutputParser.parse(self, text)
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/retry.html
|
1cd857bb4421-1
|
23 json_object = json.loads(json_str)
---> 24 return self.pydantic_object.parse_obj(json_object)
26 except (json.JSONDecodeError, ValidationError) as e:
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj()
File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Action
action_input
field required (type=value_error.missing)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(bad_response)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Action from completion {"action": "search"}. Got: 1 validation error for Action
action_input
field required (type=value_error.missing)
If we try to use the OutputFixingParser to fix this error, it will be confused - namely, it doesn’t know what to actually put for action input.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/retry.html
|
1cd857bb4421-2
|
fix_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
fix_parser.parse(bad_response)
Action(action='search', action_input='')
Instead, we can use the RetryOutputParser, which passes in the prompt (as well as the original output) to try again to get a better response.
from langchain.output_parsers import RetryWithErrorOutputParser
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(bad_response, prompt_value)
Action(action='search', action_input='who is leo di caprios gf?')
previous
PydanticOutputParser
next
Structured Output Parser
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/retry.html
|
2d06dc6c018b-0
|
.ipynb
.pdf
OutputFixingParser
OutputFixingParser#
This output parser wraps another output parser and tries to fix any mistakes.
The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors.
But we can do other things besides throw errors. Specifically, we can pass the misformatted output, along with the formatted instructions, to the model and ask it to fix it.
For this example, we'll use a PydanticOutputParser. Here's what happens if we pass it a result that does not comply with the schema:
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List
class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
parser.parse(misformatted)
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:23, in PydanticOutputParser.parse(self, text)
22 json_str = match.group()
---> 23 json_object = json.loads(json_str)
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
|
2d06dc6c018b-1
|
24 return self.pydantic_object.parse_obj(json_object)
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File ~/.pyenv/versions/3.9.1/lib/python3.9/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
|
2d06dc6c018b-2
|
OutputParserException Traceback (most recent call last)
Cell In[6], line 1
----> 1 parser.parse(misformatted)
File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 29 raise OutputParserException(msg)
OutputParserException: Failed to parse Actor from completion {'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
Now we can construct and use a OutputFixingParser. This output parser takes as an argument another output parser but also an LLM with which to try to correct any formatting mistakes.
from langchain.output_parsers import OutputFixingParser
new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
new_parser.parse(misformatted)
Actor(name='Tom Hanks', film_names=['Forrest Gump'])
previous
CommaSeparatedListOutputParser
next
PydanticOutputParser
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
|
6cb4cbfb28f1-0
|
.ipynb
.pdf
CommaSeparatedListOutputParser
CommaSeparatedListOutputParser#
Here’s another parser strictly less powerful than Pydantic/JSON parsing.
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
output_parser = CommaSeparatedListOutputParser()
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
    template="List five {subject}.\n{format_instructions}",
    input_variables=["subject"],
    partial_variables={"format_instructions": format_instructions}
)
model = OpenAI(temperature=0)
_input = prompt.format(subject="ice cream flavors")
output = model(_input)
output_parser.parse(output)
['Vanilla',
'Chocolate',
'Strawberry',
'Mint Chocolate Chip',
'Cookies and Cream']
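The same parser also works with chat models, which the imports above hint at. A brief sketch (the exact flavors returned will depend on the model):
from langchain.prompts import HumanMessagePromptTemplate

chat_model = ChatOpenAI(temperature=0)
chat_prompt = ChatPromptTemplate.from_messages([
    HumanMessagePromptTemplate.from_template("List five {subject}.\n{format_instructions}")
])
messages = chat_prompt.format_prompt(subject="ice cream flavors", format_instructions=format_instructions).to_messages()
chat_output = chat_model(messages)
output_parser.parse(chat_output.content)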
previous
Output Parsers
next
OutputFixingParser
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 18, 2023.
|
https:///langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/comma_separated.html
|