Dataset columns: id (string, length 14-16), text (string, length 29-2.31k), source (string, length 57-122)
0056e1ff154e-1
shows how to use them in a chain. Question answering over documents consists of four steps:

Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!

Each of these steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, and then break down what is actually going on.

First, let's import some common classes we'll use no matter what.

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

Next in the generic setup, let's specify the document loader we want to use. You can download the state_of_the_union.txt file here.

from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')

One Line Index Creation#

To get started as quickly as possible, we can use the VectorstoreIndexCreator.

from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.

query = "What did the president say about Ketanji Brown Jackson"
index.query(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/getting_started.html
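Note that from_loaders accepts a list, so several loaders can feed a single index. A minimal sketch, assuming a second (hypothetical) text file alongside the first:

from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loaders = [
    TextLoader('../state_of_the_union.txt'),
    TextLoader('../another_speech.txt'),  # hypothetical second file
]
# Documents from every loader end up in the same vectorstore index.
index = VectorstoreIndexCreator().from_loaders(loaders)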
0056e1ff154e-2
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)

{'question': 'What did the president say about Ketanji Brown Jackson',
 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
 'sources': '../state_of_the_union.txt'}

What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides the convenient query and query_with_sources methods. If we just wanted to access the vectorstore directly, we can also do that.

index.vectorstore
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>

If we then want to access the VectorstoreRetriever, we can do that with:

index.vectorstore.as_retriever()
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})

Walkthrough#

Okay, so what's actually going on? How is this index getting created? A lot of the magic is hidden in this VectorstoreIndexCreator. What is it doing? There are three main steps after the documents are loaded:

Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore

Let's walk through this in code.

documents = loader.load()

Next, we will split the documents into chunks.

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

We will then select which embeddings we want to use.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/getting_started.html
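For reference, query_with_sources builds a sources-aware retrieval chain under the hood. A minimal sketch of the equivalent explicit setup, assuming RetrievalQAWithSourcesChain (verify the import against your LangChain version):

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

# Build the same kind of chain explicitly, reusing the index's vectorstore.
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever(),
)
chain({"question": "What did the president say about Ketanji Brown Jackson"})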
0056e1ff154e-3
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()

We now create the vectorstore to use as the index.

from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

So that's creating the index. Then, we expose this index in a retriever interface.

retriever = db.as_retriever()

Then, as before, we create a chain and use it to answer questions!

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."

VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter, the embeddings, and the vectorstore it uses. For example, you can configure it as below:

index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)

Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it's important to have a simple way to create indexes, we also think it's important to understand what's going on under the hood.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/getting_started.html
e525c9cbd3da-0
Text Splitters#

Note: Conceptual Guide

When you want to deal with long pieces of text, it is necessary to split that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. This notebook showcases several ways to do that.

At a high level, text splitters work as follows:

Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).

That means there are two different axes along which you can customize your text splitter:

How the text is split
How the chunk size is measured

For an introduction to the default text splitter and generic functionality see: Getting Started

We also have documentation for all the types of text splitters that are supported. Please see below for that list.

Character Text Splitter
Hugging Face Length Function
Latex Text Splitter
Markdown Text Splitter
NLTK Text Splitter
Python Code Text Splitter
RecursiveCharacterTextSplitter
Spacy Text Splitter
tiktoken (OpenAI) Length Function
TiktokenText Splitter
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters.html
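To make the two axes concrete, here is a minimal sketch that customizes both at once: splitting on single newlines and measuring chunk size in words rather than characters (the word-count length function is an illustrative assumption, not a library default):

from langchain.text_splitter import CharacterTextSplitter

def word_count(text: str) -> int:
    # Axis 2: measure chunks in words instead of characters.
    return len(text.split())

text_splitter = CharacterTextSplitter(
    separator="\n",           # axis 1: how the text is split
    chunk_size=200,           # maximum size, as measured by word_count
    chunk_overlap=20,
    length_function=word_count,
)
chunks = text_splitter.split_text("some long document text...")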
a95f2224d200-0
Document Loaders#

Note: Conceptual Guide

Combining language models with your own text data is a powerful way to differentiate them. The first step in doing this is to load the data into "documents" - a fancy way of saying some pieces of text. This module is aimed at making this easy.

A primary driver of a lot of this is the Unstructured python package. This package is a great way to transform all types of files - text, powerpoint, images, html, pdf, etc - into text data. For detailed instructions on how to get set up with Unstructured, see the installation guidelines here.

The following document loaders are provided:

CoNLL-U
Airbyte JSON
Apify Dataset
AZLyrics
Azure Blob Storage Container
Azure Blob Storage File
BigQuery Loader
Bilibili
Blackboard
College Confidential
Copy Paste
CSV Loader
DataFrame Loader
Directory Loader
DuckDB Loader
Email
EPubs
EverNote
Facebook Chat
Figma
GCS Directory
GCS File Storage
Git
GitBook
Google Drive
Gutenberg
Hacker News
HTML
iFixit
Images
IMSDb
Markdown
Notebook
Notion
Notion DB Loader
Obsidian
PDF
PowerPoint
ReadTheDocs Documentation
Roam
s3 Directory
s3 File
Sitemap Loader
Slack (Local Exported Zipfile)
Subtitle Files
Telegram
Unstructured File Loader
URL
Selenium URL Loader
Playwright URL Loader
Web Base
WhatsApp Chat
Word Documents
YouTube
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders.html
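Whatever the format, every loader follows the same pattern: construct it, call load(), and get back a list of Document objects with page_content and metadata. A minimal sketch (the file path is a placeholder):

from langchain.document_loaders import TextLoader

loader = TextLoader('../state_of_the_union.txt')  # placeholder path
documents = loader.load()  # returns List[Document]
print(documents[0].page_content[:100])
print(documents[0].metadata)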
ca45d31805f7-0
Pinecone Hybrid Search#

This notebook goes over how to use a retriever that under the hood uses Pinecone and hybrid search. The logic of this retriever is taken from this documentation.

from langchain.retrievers import PineconeHybridSearchRetriever

Setup Pinecone#

You should only have to do this part once.

Note: it's important to make sure that the "context" field that holds the document text in the metadata is not indexed. Currently you need to explicitly specify the fields you do want to index. For more information check out Pinecone's docs.

import os
import pinecone

api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
# find environment next to your API key in the Pinecone console
env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"
index_name = "langchain-pinecone-hybrid-search"

pinecone.init(api_key=api_key, environment=env)
pinecone.whoami()

WhoAmIResponse(username='load', user_label='label', projectname='load-test')

# create the index
pinecone.create_index(
    name=index_name,
    dimension=1536,  # dimensionality of dense model
    metric="dotproduct",  # sparse values supported only for dotproduct
    pod_type="s1",
    metadata_config={"indexed": []}  # see explanation above
)

Now that it's created, we can use it.

index = pinecone.Index(index_name)

Get embeddings and sparse encoders#

Embeddings are used for the dense vectors; a tokenizer is used for the sparse vector.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html
ca45d31805f7-1
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()

To encode the text to sparse values you can choose either SPLADE or BM25. For out-of-domain tasks we recommend using BM25. For more information about the sparse encoders you can check out the pinecone-text library docs.

from pinecone_text.sparse import BM25Encoder
# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE

# use default tf-idf values
bm25_encoder = BM25Encoder().default()

The above code uses default tf-idf values. It's highly recommended to fit the tf-idf values to your own corpus. You can do it as follows:

corpus = ["foo", "bar", "world", "hello"]

# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)

# store the values to a json file
bm25_encoder.dump("bm25_values.json")

# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")

Load Retriever#

We can now construct the retriever!

retriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)

Add texts (if necessary)#

We can optionally add texts to the retriever (if they aren't already in there).

retriever.add_texts(["foo", "bar", "world", "hello"])
100%|██████████| 1/1 [00:02<00:00,  2.27s/it]

Use Retriever#

We can now use the retriever!

result = retriever.get_relevant_documents("foo")
result[0]
Document(page_content='foo', metadata={})
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html
01b307cc5e3b-0
ChatGPT Plugin Retriever#

This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.

Create#

First, let's go over how to create the ChatGPT Retriever Plugin. To set it up, please follow the instructions here. You can also create the ChatGPT Retriever Plugin from LangChain document loaders. The below code walks through how to do that.

# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html

from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')
data = loader.load()

# STEP 2: Convert
# Convert Document to the format expected by https://github.com/openai/chatgpt-retrieval-plugin

from typing import List
from langchain.docstore.document import Document
import json

def write_json(path: str, documents: List[Document]) -> None:
    results = [{"text": doc.page_content} for doc in documents]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

write_json("foo.json", data)

# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json

Using the ChatGPT Retriever Plugin#

Okay, so we've created the ChatGPT Retriever Plugin, but how do we actually use it? The below code walks through how to do that.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html
01b307cc5e3b-1
from langchain.retrievers import ChatGPTPluginRetriever

retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")

[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
 Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0),
 Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html
fd6bc3446c80-0
ElasticSearch BM25#

This notebook goes over how to use a retriever that under the hood uses ElasticSearch and BM25. For more information on the details of BM25 see this blog post.

from langchain.retrievers import ElasticSearchBM25Retriever

Create New Retriever#

elasticsearch_url = "http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")

# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url = "http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")

Add texts (if necessary)#

We can optionally add texts to the retriever (if they aren't already in there).

retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
 '8631bfc8-7c12-48ee-ab56-8ad5f373676e',
 '8be8374c-3253-4d87-928d-d73550a2ecf0',
 'd79f457b-2842-4eab-ae10-77aa420b53d7']

Use Retriever#

We can now use the retriever!

result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html
b6600b860b6a-0
TF-IDF Retriever#

This notebook goes over how to use a retriever that under the hood uses TF-IDF, implemented with scikit-learn. For more information on the details of TF-IDF see this blog post.

from langchain.retrievers import TFIDFRetriever

# !pip install scikit-learn

Create New Retriever with Texts#

retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])

Use Retriever#

We can now use the retriever!

result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
 Document(page_content='foo bar', metadata={}),
 Document(page_content='hello', metadata={}),
 Document(page_content='world', metadata={})]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/tf_idf_retriever.html
45908fcaf7f5-0
Databerry#

This notebook shows how to use Databerry's retriever.

First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL.

Query#

Now that our index is set up, we can set up a retriever and start querying it.

from langchain.retrievers import DataberryRetriever

retriever = DataberryRetriever(
    datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
    # api_key="DATABERRY_API_KEY", # optional if datastore is public
    # top_k=10 # optional
)

retriever.get_relevant_documents("What is Daftpage?")

[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
 Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/databerry.html
45908fcaf7f5-1
Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
 Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at [email protected] the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/databerry.html
133b9b2a2ab6-0
Weaviate Hybrid Search#

This notebook shows how to use Weaviate hybrid search as a LangChain retriever.

import weaviate
import os

WEAVIATE_URL = "..."
client = weaviate.Client(
    url=WEAVIATE_URL,
)

from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document

retriever = WeaviateHybridSearchRetriever(client, index_name="LangChain", text_key="text")

docs = [Document(page_content="foo")]
retriever.add_documents(docs)
['3f79d151-fb84-44cf-85e0-8682bfe145e0']

retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={})]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/weaviate-hybrid.html
7639dfdd653d-0
Metal#

This notebook shows how to use Metal's retriever.

First, you will need to sign up for Metal and get an API key. You can do so here.

# !pip install metal_sdk

from metal_sdk.metal import Metal

API_KEY = ""
CLIENT_ID = ""
APP_ID = ""
metal = Metal(API_KEY, CLIENT_ID, APP_ID)

Ingest Documents#

You only need to do this if you haven't already set up an index.

metal.index({"text": "foo1"})
metal.index({"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42', 'text': 'foo', 'createdAt': '2023-03-31T19:51:06.748Z'}}

Query#

Now that our index is set up, we can set up a retriever and start querying it.

from langchain.retrievers import MetalRetriever

retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
 Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/metal.html
bca5df2401b9-0
VectorStore Retriever#

The index - and therefore the retriever - that LangChain has the most support for is a VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore. Once you construct a VectorStore, it's very easy to construct a retriever. Let's walk through an example.

from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')

from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(texts, embeddings)

Exiting: Cleaning up .chroma directory

retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")

By default, the vectorstore retriever uses similarity search. If the underlying vectorstore supports maximum marginal relevance search, you can specify that as the search type.

retriever = db.as_retriever(search_type="mmr")
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")

You can also specify search kwargs like k to use when doing retrieval.

retriever = db.as_retriever(search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
len(docs)
1
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/retrievers/examples/vectorstore-retriever.html
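Because this retriever implements the shared retriever interface, it plugs directly into a question answering chain, just like in the indexes getting-started guide. A minimal sketch combining the options above:

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# MMR search with a small k, wired into a "stuff" QA chain.
retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 2})
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
qa.run("what did he say about ketanji brown jackson")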
6098b6c3e5b1-0
Getting Started#

The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks based on splitting on the first character, but if any chunks are too large it moves on to the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""].

In addition to controlling which characters you can split on, you can also control a few other things:

length_function: how the length of chunks is calculated. Defaults to just counting the number of characters, but it's pretty common to pass a token counter here (see the sketch after the code below).
chunk_size: the maximum size of your chunks (as measured by the length function).
chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain continuity between chunks (e.g., a sliding window).

# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size = 100,
    chunk_overlap = 20,
    length_function = len,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/getting_started.html
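Since token limits are usually what matter for LLMs, a common choice for length_function is a token counter. A minimal sketch using tiktoken (LangChain also ships built-in helpers for this, covered in the tiktoken sections below):

import tiktoken
from langchain.text_splitter import RecursiveCharacterTextSplitter

enc = tiktoken.get_encoding("gpt2")

def tiktoken_len(text: str) -> int:
    # Measure chunk size in tokens rather than characters.
    return len(enc.encode(text))

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,  # now interpreted as 100 tokens
    chunk_overlap=20,
    length_function=tiktoken_len,
)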
b24a02d1e80a-0
Python Code Text Splitter#

PythonCodeTextSplitter splits text along Python class and method definitions. It's implemented as a simple subclass of RecursiveCharacterTextSplitter with Python-specific separators. See the source code for the Python syntax expected by default.

How the text is split: by a list of Python-specific characters
How the chunk size is measured: by the length function passed in (defaults to number of characters)

from langchain.text_splitter import PythonCodeTextSplitter

python_text = """
class Foo:

    def bar():

def foo():

def testing_func():

def bar():
"""

python_splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
docs = python_splitter.create_documents([python_text])
docs
[Document(page_content='Foo:\n\n def bar():', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='foo():\n\ndef testing_func():', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='bar():', lookup_str='', metadata={}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/python.html
61c034bc2184-0
tiktoken (OpenAI) Length Function#

You can also use tiktoken, an open-source tokenizer package from OpenAI, to estimate the number of tokens used. This will likely be more accurate for OpenAI models.

How the text is split: by the character passed in
How the chunk size is measured: by the tiktoken tokenizer

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.

Last year COVID-19 kept us apart. This year we are finally together again.

Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.

With a duty to one another to the American people to the Constitution.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/tiktoken.html
850a3ebf507f-0
Latex Text Splitter#

LatexTextSplitter splits text along Latex headings, headlines, enumerations and more. It's implemented as a simple subclass of RecursiveCharacterTextSplitter with Latex-specific separators. See the source code for the Latex syntax expected by default.

How the text is split: by a list of Latex-specific tags
How the chunk size is measured: by the length function passed in (defaults to number of characters)

from langchain.text_splitter import LatexTextSplitter

latex_text = """
\documentclass{article}

\begin{document}

\maketitle

\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.

\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.

\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.

\end{document}
"""

latex_splitter = LatexTextSplitter(chunk_size=400, chunk_overlap=0)
docs = latex_splitter.create_documents([latex_text])
docs

[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', lookup_str='', metadata={},
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/latex.html
850a3ebf507f-1
lookup_index=0),
 Document(page_content='Introduction}\nLarge language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='History of LLMs}\nThe earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='Applications of LLMs}\nLLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n\n\\end{document}', lookup_str='', metadata={}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/latex.html
befd97e3fe64-0
Spacy Text Splitter#

Another alternative to NLTK is to use Spacy.

How the text is split: by Spacy
How the chunk size is measured: by the length function passed in (defaults to number of characters)

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import SpacyTextSplitter

text_splitter = SpacyTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/spacy.html
0ca9f3510acd-0
TiktokenText Splitter#

Unlike the tiktoken length function above, which splits on a separator and only measures size in tokens, TokenTextSplitter splits the text directly on token boundaries.

How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/tiktoken_splitter.html
3bfbfd9677b0-0
RecursiveCharacterTextSplitter#

This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text.

How the text is split: by a list of characters (configurable; see the sketch after this section)
How the chunk size is measured: by the length function passed in (defaults to number of characters)

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size = 100,
    chunk_overlap = 20,
    length_function = len,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html
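The separator list itself is configurable. A sketch that adds sentence-ending punctuation to the fallback order, assuming the separators keyword argument:

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", ". ", " ", ""],  # paragraphs, lines, sentences, words
    chunk_size=100,
    chunk_overlap=20,
)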
2ad426a2f2f3-0
Markdown Text Splitter#

MarkdownTextSplitter splits text along Markdown headings, code blocks, or horizontal rules. It's implemented as a simple subclass of RecursiveCharacterTextSplitter with Markdown-specific separators. See the source code for the Markdown syntax expected by default.

How the text is split: by a list of Markdown-specific characters
How the chunk size is measured: by the length function passed in (defaults to number of characters)

from langchain.text_splitter import MarkdownTextSplitter

markdown_text = """
# 🦜️🔗 LangChain

⚡ Building applications with LLMs through composability ⚡

## Quick Install

```bash
# Hopefully this code block isn't split
pip install langchain
```

As an open source project in a rapidly developing field, we are extremely open to contributions.
"""

markdown_splitter = MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
docs = markdown_splitter.create_documents([markdown_text])
docs
[Document(page_content='# 🦜️🔗 LangChain\n\n⚡ Building applications with LLMs through composability ⚡', lookup_str='', metadata={}, lookup_index=0),
 Document(page_content="Quick Install\n\n```bash\n# Hopefully this code block isn't split\npip install langchain", lookup_str='', metadata={}, lookup_index=0),
 Document(page_content='As an open source project in a rapidly developing field, we are extremely open to contributions.', lookup_str='', metadata={}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/markdown.html
41a8d8b9fe8f-0
Character Text Splitter#

This is a simpler method. It splits based on a single character (by default "\n\n") and measures chunk length by number of characters.

How the text is split: by a single character
How the chunk size is measured: by the length function passed in (defaults to number of characters)

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator = "\n\n",
    chunk_size = 1000,
    chunk_overlap = 200,
    length_function = len,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
41a8d8b9fe8f-1
Here's an example of passing metadata along with the documents; notice that it is split along with the documents.

metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
b66b242d1b5a-0
Hugging Face Length Function#

Most LLMs are constrained by the number of tokens you can pass in, which is not the same as the number of characters. To get a more accurate estimate, we can use Hugging Face tokenizers to count the text length.

How the text is split: by the character passed in
How the chunk size is measured: by the Hugging Face tokenizer

from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.

Last year COVID-19 kept us apart. This year we are finally together again.

Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.

With a duty to one another to the American people to the Constitution.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/huggingface_length_function.html
9766011b8dca-0
NLTK Text Splitter#

Rather than just splitting on "\n\n", we can use NLTK to split based on tokenizers.

How the text is split: by NLTK
How the chunk size is measured: by the length function passed in (defaults to number of characters)

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import NLTKTextSplitter

text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/text_splitters/examples/nltk.html
2fc88fe423d6-0
Getting Started#

This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vectors to put in them, which are usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this.

This covers generic, high-level functionality related to all vector stores. For guides on specific vectorstores, please see the how-to guides here.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.

We cannot let this happen.

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/getting_started.html
2fc88fe423d6-1
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Add texts#

You can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream).

docsearch.add_texts(["Ankush went to Princeton"])
['a05e3d0c-ab40-11ed-a853-e65801318981']

query = "Where did Ankush go to college?"
docs = docsearch.similarity_search(query)
docs[0]
Document(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0)

From Documents#

We can also initialize a vectorstore from documents directly. This is useful when we use the text splitter's method to get documents directly (handy when the original documents have associated metadata).

documents = text_splitter.create_documents([state_of_the_union], metadatas=[{"source": "State of the Union"}])
docsearch = Chroma.from_documents(documents, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.

We cannot let this happen.

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/getting_started.html
2fc88fe423d6-2
Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/getting_started.html
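The "Data will be transient" warning above means the in-memory index is lost when the process exits. A sketch of persisting the Chroma collection to disk instead, assuming the persist_directory option (check your Chroma/LangChain versions):

docsearch = Chroma.from_documents(documents, embeddings, persist_directory="./chroma_db")
docsearch.persist()  # flush the collection to disk

# Later, reload it without re-embedding:
docsearch = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)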
6b31a89acb12-0
FAISS#

This notebook shows how to use functionality related to the FAISS vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity Search with score#

There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the similarity score of the query to them.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/examples/faiss.html
6b31a89acb12-1
docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]

(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), 0.3914415)

It is also possible to search for documents similar to a given embedding vector using similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.

embedding_vector = embeddings.embed_query(query)
docs_and_scores = db.similarity_search_by_vector(embedding_vector)

Saving and loading#

You can also save and load a FAISS index. This is useful so you don't have to recreate it every time you use it.

db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/examples/faiss.html
6b31a89acb12-2
docs[0]

Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)

Merging#

You can also merge two FAISS vectorstores.

db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)

db1.docstore._dict
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0)}

db2.docstore._dict
{'bdc50ae3-a1bb-4678-9260-1b0979578f40': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/examples/faiss.html
6b31a89acb12-3
Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)} db1.merge_from(db2) db1.docstore._dict {'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0), 'd5211050-c777-493d-8825-4800e74cfdb6': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)} previous ElasticSearch next Milvus Contents Similarity Search with score Saving and loading Merging By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 18, 2023.
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/vectorstores/examples/faiss.html
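The score returned by similarity_search_with_score is a raw FAISS L2 distance under the default index construction, so lower means more similar. As a small sketch (not part of the original notebook), you can pull back several scored results at once via the standard k parameter:

# Sketch: retrieve the top 4 chunks together with their distances.
docs_and_scores = db.similarity_search_with_score(query, k=4)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])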
AtlasDB#
This notebook shows you how to use functionality related to the AtlasDB vector store.

import time
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import SpacyTextSplitter
from langchain.vectorstores import AtlasDB
from langchain.document_loaders import TextLoader

!python -m spacy download en_core_web_sm

ATLAS_TEST_API_KEY = '7xDPkYXSYDc1_ErdTPIcoAR9RNd8YDlkS3nVNXcVoIMZ6'

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = SpacyTextSplitter(separator='|')
texts = []
for doc in text_splitter.split_documents(documents):
    texts.extend(doc.page_content.split('|'))
texts = [e.strip() for e in texts]

db = AtlasDB.from_texts(texts=texts,
                        name='test_index_' + str(time.time()),  # unique name for your vector store
                        description='test_index',  # a description for your vector store
                        api_key=ATLAS_TEST_API_KEY,
                        index_kwargs={'build_topic_model': True})

db.project.wait_for_project_lock()
db.project

test_index_1677255228.136989: A description for your project. 508 datums inserted. 1 index built. Projection: test_index_1677255228.136989_index (ID: db996d77-8981-48a0-897a-ff2c22bbf541), status Completed.
PGVector#
This notebook shows how to use functionality related to the Postgres vector database (PGVector).

## Loading Environment Variables
from typing import List, Tuple
from dotenv import load_dotenv
load_dotenv()

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector
from langchain.document_loaders import TextLoader
from langchain.docstore.document import Document

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

## PGVector needs the connection string to the database.
## We will load it from the environment variables.
import os
CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host=os.environ.get("PGVECTOR_HOST", "localhost"),
    port=int(os.environ.get("PGVECTOR_PORT", "5432")),
    database=os.environ.get("PGVECTOR_DATABASE", "postgres"),
    user=os.environ.get("PGVECTOR_USER", "postgres"),
    password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),
)

## Example
# postgresql+psycopg2://username:password@localhost:5432/database_name

Similarity search with score#

Similarity Search with Euclidean Distance (Default)#
The score returned below is a Euclidean distance, so a lower value means a closer match.

# The PGVector module will try to create a table with the name of the collection.
# Make sure that the collection name is unique and that the user has permission to create a table.
db = PGVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name="state_of_the_union",
    connection_string=CONNECTION_STRING,
)

query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)

for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print("-" * 80)

--------------------------------------------------------------------------------
Score:  0.6076628081132506
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score:  0.6076628081132506
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score:  0.6076804780049968
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score:  0.6076804780049968
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
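As with the other vector stores, the PGVector instance can also be wrapped in the standard retriever interface. A minimal sketch (not in the original notebook), reusing the db built above:

# Sketch: expose the Postgres-backed store as a retriever, capped at one result.
retriever = db.as_retriever(search_kwargs={"k": 1})
relevant_docs = retriever.get_relevant_documents(query)
print(relevant_docs[0].page_content)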
Zilliz#
This notebook shows how to use functionality related to the Zilliz Cloud managed vector database. To run, you should have a Zilliz Cloud instance up and running: https://zilliz.com/cloud

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader

# replace with your cluster's endpoint details
ZILLIZ_CLOUD_HOSTNAME = ""  # example: "in01-17f69c292d4a50a.aws-us-west-2.vectordb.zillizcloud.com"
ZILLIZ_CLOUD_PORT = ""  # example: "19532"

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"host": ZILLIZ_CLOUD_HOSTNAME, "port": ZILLIZ_CLOUD_PORT},
)

query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0]
Redis#
This notebook shows how to use functionality related to the Redis vector database.

from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.redis import Redis
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link')
rds.index_name
'link'

query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

print(rds.add_texts(["Ankush went to Princeton"]))
['doc:link:d7d02e3faf1b40bbbe29a683ff75b280']

query = "Princeton"
results = rds.similarity_search(query)
print(results[0].page_content)
Ankush went to Princeton

# Load from existing index
rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", index_name='link')
query = "What did the president say about Ketanji Brown Jackson"
results = rds.similarity_search(query)
print(results[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

RedisVectorStoreRetriever#
Here we go over different options for using the vector store as a retriever. There are three different search methods we can use for retrieval; by default, semantic similarity is used.

retriever = rds.as_retriever()
docs = retriever.get_relevant_documents(query)

We can also use similarity_limit as a search method. This only returns documents if they are similar enough.

retriever = rds.as_retriever(search_type="similarity_limit")
# Here it doesn't return any results because no documents pass the similarity threshold
retriever.get_relevant_documents("where did ankush go to college?")
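For contrast with the similarity_limit example above, here is a sketch (not in the original notebook) of the same question against the default similarity retriever, which always returns the nearest neighbours even when nothing passes a relevance bar:

# Sketch: the default retriever returns the closest match regardless of
# how weak the similarity is.
retriever = rds.as_retriever()
docs = retriever.get_relevant_documents("where did ankush go to college?")
print(docs[0].page_content)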
Qdrant#
This notebook shows how to use functionality related to the Qdrant vector database. There are various modes of running Qdrant, and depending on the chosen one there will be some subtle differences. The options include: local mode (no server required), on-premise server deployment, and Qdrant Cloud.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

Connecting to Qdrant from LangChain#

Local mode#
The Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing and debugging, or if you plan to store only a small number of vectors. The embeddings can be kept fully in memory or persisted on disk.

In-memory#
For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it is lost when the client is destroyed, usually at the end of your script or notebook.

qdrant = Qdrant.from_documents(
    docs, embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)

On-disk storage#
Local mode, without using the Qdrant server, can also store your vectors on disk so they persist between runs.

qdrant = Qdrant.from_documents(
    docs, embeddings,
    path="/tmp/local_qdrant",
    collection_name="my_documents",
)

On-premise server deployment#
Whether you launch Qdrant locally with a Docker container or choose a Kubernetes deployment with the official Helm chart, the way you connect to such an instance is identical: you provide a URL pointing to the service.

url = "<---qdrant url here --->"
qdrant = Qdrant.from_documents(
    docs, embeddings,
    url, prefer_grpc=True,
    collection_name="my_documents",
)

Qdrant Cloud#
If you prefer not to keep yourself busy with managing the infrastructure, you can set up a fully managed Qdrant cluster on Qdrant Cloud. A free-forever 1GB cluster is included for trying it out. The main difference when using a managed version of Qdrant is that you'll need to provide an API key to secure your deployment from being accessed publicly.

url = "<---qdrant cloud cluster url here --->"
api_key = "<---api key here--->"
qdrant = Qdrant.from_documents(
    docs, embeddings,
    url, prefer_grpc=True,
    api_key=api_key,
    collection_name="my_documents",
)

Reusing the same collection#
Both Qdrant.from_texts and Qdrant.from_documents are great ways to start using Qdrant with LangChain, but they destroy the collection and create it from scratch! If you want to reuse an existing collection, you can always create an instance of Qdrant on your own and pass a QdrantClient instance with the connection details.

del qdrant

import qdrant_client
client = qdrant_client.QdrantClient(
    path="/tmp/local_qdrant", prefer_grpc=True
)
qdrant = Qdrant(
    client=client, collection_name="my_documents",
    embedding_function=embeddings.embed_query
)

Similarity search#
The simplest scenario for using the Qdrant vector store is to perform a similarity search. Under the hood, the query is encoded with the embedding_function and used to find similar documents in the Qdrant collection.

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
print(found_docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity search with score#
Sometimes we might want to perform the search but also obtain a relevancy score, to know how good a particular result is.

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(query)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Score: 0.8153784913324512

Maximum marginal relevance search (MMR)#
If you'd like to look up some similar documents but also receive diverse results, MMR is a method worth considering. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.
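For reference (the notebook doesn't spell this out), the classic maximal marginal relevance criterion repeatedly picks the candidate document D_i from the fetched set R that best trades off similarity to the query Q against redundancy with the already-selected set S, weighted by a parameter lambda:

$$\mathrm{MMR} = \arg\max_{D_i \in R \setminus S}\Big[\lambda\,\mathrm{Sim}(D_i, Q) - (1-\lambda)\,\max_{D_j \in S}\mathrm{Sim}(D_i, D_j)\Big]$$

In the call below, fetch_k controls how many candidates go into R, while k is the number of documents finally returned.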
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)
for i, doc in enumerate(found_docs):
    print(f"{i + 1}.", doc.page_content, "\n")

1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.

I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.

Officer Mora was 27 years old. Officer Rivera was 22. Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.

I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.

I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.

Qdrant as a Retriever#
Qdrant, like all the other vector stores, is a LangChain Retriever, using cosine similarity by default.

retriever = qdrant.as_retriever()
retriever
VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})

You can also specify MMR as the search strategy instead of plain similarity.

retriever = qdrant.as_retriever(search_type="mmr")
retriever
VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})

query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})

Customizing Qdrant#
Qdrant stores your vector embeddings along with an optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data so you can extract the original texts as well.

By default, your document is stored in the following payload structure:

{
    "page_content": "Lorem ipsum dolor sit amet",
    "metadata": {
        "foo": "bar"
    }
}

You can, however, decide to use different keys for the page content and metadata. That's useful if you already have a collection you'd like to reuse. You can always change the payload keys when creating the store:

Qdrant.from_documents(
    docs, embeddings,
    location=":memory:",
    collection_name="my_documents_2",
    content_payload_key="my_page_content_key",
    metadata_payload_key="my_meta",
)
<langchain.vectorstores.qdrant.Qdrant at 0x7fc4e2baa230>
Milvus#
This notebook shows how to use functionality related to the Milvus vector database. To run, you should have a Milvus instance up and running: https://milvus.io/docs/install_standalone-docker.md

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)

query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0]
Chroma#
This notebook shows how to use functionality related to the Chroma vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

db = Chroma.from_documents(docs, embeddings)
Using embedded DuckDB without persistence: data will be transient

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity search with score#
docs = db.similarity_search_with_score(query)
docs[0]

(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), 0.3913410007953644)

Persistence#
The steps below cover how to persist a ChromaDB instance.

Initialize PersistedChromaDB#
Create embeddings for each chunk and insert them into the Chroma vector database. The persist_directory argument tells ChromaDB where to store the database when it's persisted.

# Embed and store the texts
# Supplying a persist_directory will store the embeddings on disk
persist_directory = 'db'
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=docs, embedding=embedding, persist_directory=persist_directory)

Running Chroma using direct local API.
No existing DB found in db, skipping load
No existing DB found in db, skipping load

Persist the Database#
We should call persist() to ensure the embeddings are written to disk.

vectordb.persist()
vectordb = None

Persisting DB to disk, putting it in the save folder db
PersistentDuckDB del, about to run persist
Persisting DB to disk, putting it in the save folder db

Load the Database from disk, and create the chain#
Be sure to pass the same persist_directory and embedding_function as you did when you instantiated the database, then initialize the chain we will use for question answering.

# Now we can load the persisted database from disk, and use it as normal.
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)

Running Chroma using direct local API.
loaded in 4 embeddings
loaded in 1 collections
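The notebook stops short of actually constructing the question-answering chain it mentions; a minimal sketch of that step (not in the original, following the RetrievalQA pattern used elsewhere in these docs):

# Sketch: build a QA chain on top of the reloaded, persisted store.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectordb.as_retriever(),
)
qa.run("What did the president say about Ketanji Brown Jackson")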
Retriever options#
This section goes over different options for how to use Chroma as a retriever.

MMR#
In addition to using similarity search in the retriever object, you can also use mmr.

retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0]

Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Deep Lake#
This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a fully fledged serverless data lake with version control, a query engine, and a streaming dataloader for deep learning frameworks.

For more information, please see the Deep Lake documentation or API reference.

!python3 -m pip install openai deeplake

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
from langchain.document_loaders import TextLoader

import os
os.environ['OPENAI_API_KEY'] = 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

db = DeepLake.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

Retrieval Question/Answering#
from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat

qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever())
query = 'What did the president say about Ketanji Brown Jackson'
qa.run(query)

Attribute based filtering in metadata#
import random

for d in docs:
    d.metadata['year'] = random.randint(2012, 2014)

db = DeepLake.from_documents(docs, embeddings)
db.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})

Choosing distance function#
Distance functions: L2 for Euclidean, L1 for nuclear, max for L-infinity distance, cos for cosine similarity, and dot for dot product.

db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos')

Maximal Marginal relevance#
Using maximal marginal relevance:

db.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?')

Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or local#
By default, Deep Lake datasets are stored in memory. If you want to persist locally or to any object storage, simply provide a path to the dataset. You can retrieve a token from app.activeloop.ai.

!activeloop login -t <token>

# Embed and store the texts
dataset_path = "hub://{username}/{dataset_name}"  # could also be ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
vectordb = DeepLake.from_documents(documents=docs, embedding=embedding, dataset_path=dataset_path)

query = "What did the president say about Ketanji Brown Jackson"
docs = vectordb.similarity_search(query)
print(docs[0].page_content)

vectordb.ds.summary()
embeddings = vectordb.ds.embedding.numpy()
Weaviate#
This notebook shows how to use functionality related to the Weaviate vector database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

import weaviate
import os

WEAVIATE_URL = ""
client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        'X-OpenAI-Api-Key': os.environ["OPENAI_API_KEY"]
    }
)

client.schema.delete_all()
client.schema.get()
schema = {
    "classes": [
        {
            "class": "Paragraph",
            "description": "A written paragraph",
            "vectorizer": "text2vec-openai",
            "moduleConfig": {
                "text2vec-openai": {
                    "model": "babbage",
                    "type": "text"
                }
            },
            "properties": [
                {
                    "dataType": ["text"],
                    "description": "The content of the paragraph",
                    "moduleConfig": {
                        "text2vec-openai": {
                            "skip": False,
                            "vectorizePropertyName": False
                        }
                    },
                    "name": "content",
                },
            ],
        },
    ]
}

client.schema.create(schema)

vectorstore = Weaviate(client, "Paragraph", "content")
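Note that at this point nothing has been written into the Paragraph class, so a similarity search would come back empty. A minimal sketch of the missing indexing step (not in the original notebook; add_texts is the generic vectorstore method, and vectors are produced server-side by the text2vec-openai module configured above):

# Sketch: index the split documents so there is something to search.
vectorstore.add_texts([d.page_content for d in docs])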
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].page_content)
OpenSearch#
This notebook shows how to use functionality related to the OpenSearch database. To run, you should have an OpenSearch instance up and running; see the OpenSearch installation documentation.

By default, similarity_search performs an approximate k-NN search using one of several algorithms (lucene, nmslib, faiss), which is recommended for large datasets. To perform a brute-force search there are other search methods, known as Script Scoring and Painless Scripting; check the documentation for more details.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200")

query = "What did the president say about Ketanji Brown Jackson"
results = docsearch.similarity_search(query)
print(results[0].page_content)

similarity_search using Approximate k-NN Search with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48)

query = "What did the president say about Ketanji Brown Jackson"
results = docsearch.similarity_search(query)
print(results[0].page_content)

similarity_search using Script Scoring with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)

query = "What did the president say about Ketanji Brown Jackson"
results = docsearch.similarity_search(query, k=1, search_type="script_scoring")
print(results[0].page_content)

similarity_search using Painless Scripting with Custom Parameters#
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}

query = "What did the president say about Ketanji Brown Jackson"
results = docsearch.similarity_search(query, search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter)
print(results[0].page_content)

Using a preexisting OpenSearch instance#
It's also possible to use a preexisting OpenSearch instance with documents that already have vectors present.

# this is just an example; you would need to change these values to point to another opensearch instance
docsearch = OpenSearchVectorSearch(index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200")

# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata
results = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata")
Pinecone#
This notebook shows how to use functionality related to the Pinecone vector database. You'll need the pinecone-client package installed and API credentials from app.pinecone.io.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

import pinecone

# initialize pinecone
pinecone.init(
    api_key="YOUR_API_KEY",  # find at app.pinecone.io
    environment="YOUR_ENV"  # next to api key in console
)

index_name = "langchain-demo"
docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
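If the index has already been populated, you can connect to it directly instead of re-embedding everything. A minimal sketch (not in the original notebook) using the from_existing_index helper:

# Sketch: attach to an index that already contains vectors.
docsearch = Pinecone.from_existing_index(index_name, embeddings)
docs = docsearch.similarity_search(query)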
ElasticSearch#
This notebook shows how to use functionality related to the ElasticSearch database.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200")

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.

We cannot let this happen.

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
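New documents can be appended to the same index later without rebuilding it. A small sketch (not in the original notebook) using the generic add_texts method, mirroring the Redis example above:

# Sketch: append an extra document, then query for it.
db.add_texts(["Ankush went to Princeton"])
results = db.similarity_search("Princeton")
print(results[0].page_content)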
Airbyte JSON#
This covers how to load any source from Airbyte into a local JSON file that can be read in as a document.

Prerequisites: Have Docker Desktop installed.

Steps:
1. Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git
2. Switch into the Airbyte directory - cd airbyte
3. Start Airbyte - docker compose up
4. In your browser, visit http://localhost:8000. You will be asked for a username and password. By default, that's username airbyte and password password.
5. Set up any source you wish.
6. Set the destination as Local JSON, with a specified destination path - let's say /json_data. Set up a manual sync.
7. Run the connection.
8. To see what files were created, navigate to file:///tmp/airbyte_local.
9. Find your data and copy the path. That path should be saved in the file variable below; it should start with /tmp/airbyte_local.

from langchain.document_loaders import AirbyteJSONLoader

!ls /tmp/airbyte_local/json_data/
_airbyte_raw_pokemon.jsonl

loader = AirbyteJSONLoader('/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl')
data = loader.load()
print(data[0].page_content[:500])

abilities:
ability:
name: blaze
url: https://pokeapi.co/api/v2/ability/66/
is_hidden: False
slot: 1
ability:
name: solar-power
url: https://pokeapi.co/api/v2/ability/94/
is_hidden: True
slot: 3
base_experience: 267
forms:
name: charizard
url: https://pokeapi.co/api/v2/pokemon-form/6/
game_indices:
game_index: 180
version:
name: red
url: https://pokeapi.co/api/v2/version/1/
game_index: 180
version:
name: blue
url: https://pokeapi.co/api/v2/version/2/
game_index: 180
version: n
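From here the loaded records drop straight into the indexing pipeline. A minimal sketch (not in the original notebook) using the one-line index creator from the getting started guide; the question is just illustrative for the Pokémon sample data:

from langchain.indexes import VectorstoreIndexCreator

# Sketch: build a searchable index directly from the Airbyte JSON loader.
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("Which pokemon has the ability blaze?")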
Figma#
This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.

import os
from langchain.document_loaders.figma import FigmaFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import ConversationChain, LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

The Figma API requires an access token, node_ids, and a file key.

The file key can be pulled from the URL: https://www.figma.com/file/{filekey}/sampleFilename

Node IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.

Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens

figma_loader = FigmaFileLoader(
    os.environ.get('ACCESS_TOKEN'),
    os.environ.get('NODE_IDS'),
    os.environ.get('FILE_KEY')
)

# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([figma_loader])
figma_doc_retriever = index.vectorstore.as_retriever()

def generate_code(human_input):
    # I have no idea if the Jon Carmack thing makes for better code. YMMV.
    # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info
    system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request.
    Everything must be inline in one file and your response must be directly renderable by the browser.
    Figma file nodes and metadata: {context}"""

    human_prompt_template = "Code the {text}. Ensure it's mobile responsive"
    system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template)
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template)
    # delete the gpt-4 model_name to use the default gpt-3.5-turbo for faster results
    gpt_4 = ChatOpenAI(temperature=.02, model_name='gpt-4')
    # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs
    relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)
    conversation = [system_message_prompt, human_message_prompt]
    chat_prompt = ChatPromptTemplate.from_messages(conversation)
    response = gpt_4(chat_prompt.format_prompt(
        context=relevant_nodes,
        text=human_input).to_messages())
    return response

response = generate_code("page top header")

Returns the following in response.content:

<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>
941bdf8325ae-0
.ipynb .pdf BigQuery Loader Contents Basic Usage Specifying Which Columns are Content vs Metadata Adding Source to Metadata BigQuery Loader# Load a BigQuery query with one document per row. from langchain.document_loaders import BigQueryLoader BASE_QUERY = ''' SELECT id, dna_sequence, organism FROM ( SELECT ARRAY ( SELECT AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism UNION ALL SELECT AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism UNION ALL SELECT AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array), UNNEST(new_array) ''' Basic Usage# loader = BigQueryLoader(BASE_QUERY) data = loader.load() print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)] Specifying Which Columns are Content vs Metadata# loader =
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/bigquery.html
941bdf8325ae-1
metadata={}, lookup_index=0)] Specifying Which Columns are Content vs Metadata# loader = BigQueryLoader(BASE_QUERY, page_content_columns=["dna_sequence", "organism"], metadata_columns=["id"]) data = loader.load() print(data) [Document(page_content='dna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0), Document(page_content='dna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0), Document(page_content='dna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)] Adding Source to Metadata# # Note that the `id` column is being returned twice, with one instance aliased as `source` ALIASED_QUERY = ''' SELECT id, dna_sequence, organism, id as source FROM ( SELECT ARRAY ( SELECT AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism UNION ALL SELECT AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism UNION ALL SELECT AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array), UNNEST(new_array) ''' loader =
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/bigquery.html
941bdf8325ae-2
W1)." AS organism) AS new_array), UNNEST(new_array) ''' loader = BigQueryLoader(ALIASED_QUERY, metadata_columns=["source"]) data = loader.load() print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)] previous Azure Blob Storage File next Bilibili Contents Basic Usage Specifying Which Columns are Content vs Metadata Adding Source to Metadata By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 18, 2023.
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/bigquery.html
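When the query runs against a real dataset rather than the inline array above, the loader usually needs to know which GCP project to bill. A minimal sketch, assuming BigQueryLoader accepts an optional project keyword, with a hypothetical project ID:
# Sketch only: `project` is assumed to be an optional BigQueryLoader
# keyword, and "my-gcp-project" is a hypothetical project ID.
from langchain.document_loaders import BigQueryLoader

loader = BigQueryLoader(BASE_QUERY, project="my-gcp-project")  # BASE_QUERY as defined above
data = loader.load()
for doc in data:
    print(doc.metadata)  # one Document per returned row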
cce0cdad36f6-0
PowerPoint# This covers how to load PowerPoint documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredPowerPointLoader loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx") data = loader.load() data [Document(page_content='Adding a Bullet Slide\n\nFind the bullet slide layout\n\nUse _TextFrame.text for first bullet\n\nUse _TextFrame.add_paragraph() for subsequent bullets\n\nHere is a lot of text!\n\nHere is some text in a text box!', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)] Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those elements together, but you can easily keep them separate by specifying mode="elements". loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx", mode="elements") data = loader.load() data[0] Document(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/powerpoint.html
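With mode="elements", each element becomes its own Document, which makes it easy to post-process slides by element type. A short sketch, assuming (as the EPub and Images examples below suggest) that Unstructured records an element's type under the 'category' metadata key:
# Group elements-mode Documents by their Unstructured category.
# The 'category' key is an assumption here; .get() guards against
# elements that do not carry it.
from collections import defaultdict

by_category = defaultdict(list)
for doc in data:
    by_category[doc.metadata.get("category", "Unknown")].append(doc.page_content)

for category, texts in by_category.items():
    print(category, len(texts))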
53c2d291c684-0
EPubs# This covers how to load .epub documents into a document format that we can use downstream. You’ll need to install the pandoc package for this loader to work. from langchain.document_loaders import UnstructuredEPubLoader loader = UnstructuredEPubLoader("winter-sports.epub") data = loader.load() Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those elements together, but you can easily keep them separate by specifying mode="elements". loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements") data = loader.load() data[0] Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/epub.html
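Pandoc is a system-level dependency rather than a Python package. One way to obtain it from Python, mirroring the EverNote example later in this guide, is via pypandoc's bundled downloader:
# Sketch: install pypandoc, then fetch a pandoc binary with its helper.
# !pip install pypandoc
import pypandoc

pypandoc.download_pandoc()  # one-time download of the pandoc binary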
1ce6e931f6fd-0
Azure Blob Storage File# This covers how to load document objects from an Azure Blob Storage file. # !pip install azure-storage-blob from langchain.document_loaders import AzureBlobStorageFileLoader loader = AzureBlobStorageFileLoader(conn_str='<connection string>', container='<container name>', blob_name='<blob name>') loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_file.html
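Rather than hardcoding the connection string, it is safer to read it from the environment. A minimal sketch; AZURE_STORAGE_CONNECTION_STRING is the variable name conventionally used by the Azure SDK, and the container and blob names here are hypothetical:
import os
from langchain.document_loaders import AzureBlobStorageFileLoader

# Read the secret from the environment instead of embedding it in code.
loader = AzureBlobStorageFileLoader(
    conn_str=os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container="my-container",  # hypothetical container name
    blob_name="report.docx",   # hypothetical blob name
)
docs = loader.load()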
7c8bdc6a7c45-0
Git# This notebook shows how to load text files from a Git repository. Load existing repository from disk# from git import Repo repo = Repo.clone_from( "https://github.com/hwchase17/langchain", to_path="./example_data/test_repo1" ) branch = repo.head.reference from langchain.document_loaders.git import GitLoader loader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch) data = loader.load() len(data) 1040 print(data[0]) page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''} Clone repository from url# from langchain.document_loaders.git import GitLoader loader = GitLoader( clone_url="https://github.com/hwchase17/langchain", repo_path="./example_data/test_repo2/", branch="master", ) data = loader.load() len(data) 1040
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/git.html
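Because every file becomes one Document carrying file_path, file_name, and file_type metadata (as the printout above shows), you can filter the load after the fact. A small sketch that keeps only Python source files:
# Filter the loaded Documents down to .py files using their metadata.
python_docs = [doc for doc in data if doc.metadata["file_path"].endswith(".py")]
print(len(python_docs))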
6c92d1499b74-0
Images# This covers how to load images such as JPGs or PNGs into a document format that we can use downstream. Using Unstructured# from langchain.document_loaders.image import UnstructuredImageLoader loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg") data = loader.load() data[0] Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/image.html
6c92d1499b74-1
for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0) Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those elements together, but you can easily keep them separate by specifying mode="elements". loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements") data = loader.load() data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/image.html
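As the first example shows, raw OCR output can be quite noisy, so it often helps to split it into chunks before indexing it downstream. A minimal sketch using LangChain's CharacterTextSplitter:
# Chunk the OCR'd Documents before embedding or indexing them.
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(data)
print(len(texts))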
e9f575e333ce-0
Subtitle Files# This covers how to load data from subtitle (.srt) files. from langchain.document_loaders import SRTLoader loader = SRTLoader("example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt") docs = loader.load() docs[0].page_content[:100] '<i>Corruption discovered\nat the core of the Banking Clan!</i> <i>Reunited, Rush Clovis\nand Senator A'
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/srt.html
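Each loaded Document's page_content holds the subtitle text with the timing stripped, so the episode can be reassembled into a single searchable transcript. A quick sketch:
# Join the subtitle Documents into one transcript string.
transcript = "\n".join(doc.page_content for doc in docs)
print(transcript[:100])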
eae52a5f354a-0
Notion# This notebook covers how to load documents from a Notion database dump. To produce this Notion dump, follow these instructions: 🧑 Instructions for ingesting your own dataset# Export your dataset from Notion. You can do this by clicking on the three dots in the upper right-hand corner and then clicking Export. When exporting, make sure to select the Markdown & CSV format option. This will produce a .zip file in your Downloads folder. Move the .zip file into this repository. Run the following command to unzip the zip file (replace the Export... with your own file name as needed). unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DB Run the following command to ingest the data. from langchain.document_loaders import NotionDirectoryLoader loader = NotionDirectoryLoader("Notion_DB") docs = loader.load()
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/notion.html
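Every Markdown page in the unzipped export becomes one Document. A quick sketch for checking what was ingested; this assumes the loader records each page's file path under the 'source' metadata key, as the other directory loaders in this guide do:
# Inspect which Notion pages were loaded.
for doc in docs:
    print(doc.metadata.get("source"))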
fb3cc14ed989-0
EverNote# How to load an EverNote file from disk. # !pip install pypandoc # import pypandoc # pypandoc.download_pandoc() from langchain.document_loaders import EverNoteLoader loader = EverNoteLoader("example_data/testing.enex") loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?\n', lookup_str='', metadata={'source': 'example_data/testing.enex'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/evernote.html
4e47e2492c02-0
GCS Directory# This covers how to load document objects from a Google Cloud Storage (GCS) directory. from langchain.document_loaders import GCSDirectoryLoader # !pip install google-cloud-storage loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc") loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/gcs_directory.html
4e47e2492c02-1
Specifying a prefix# You can also specify a prefix for more fine-grained control over what files to load. loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake") loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/gcs_directory.html
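The warnings above recommend authenticating with a service account instead of end-user credentials. One common way to do that, shown as a sketch with a hypothetical key-file path, is to point the standard GOOGLE_APPLICATION_CREDENTIALS variable at a service-account key before constructing the loader:
import os

# GOOGLE_APPLICATION_CREDENTIALS is the standard variable the Google auth
# libraries check; the key-file path below is hypothetical.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

from langchain.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
docs = loader.load()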