Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='',
metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='',
metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)]
Specify a column to be used to identify the document source#
Use the source_column argument to specify a column to be set as the source for the document created from each row. Otherwise, file_path will be used as the source for all documents created from the CSV file.
This is useful when using documents loaded from CSV files for chains that answer questions using sources (see the sketch after the example output below).
loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', source_column="Team")
data = loader.load()
print(data)
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team:
Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0),
Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row':
19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source':
'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)]
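Because each row's source now points at its team rather than the file path, the output above pairs naturally with source-citing chains. Below is a minimal sketch, assuming an OpenAI key is configured; the chain choice and the question are illustrative, not part of the original example.
from langchain.document_loaders import CSVLoader
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

# Each row becomes a Document whose source is the value of the "Team" column
loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', source_column="Team")
docs = loader.load()

# The chain can cite the per-row sources set above when answering
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
result = chain({"input_documents": docs, "question": "How many wins did the Nationals have in 2012?"})
print(result["output_text"])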
Roam#
This notebook covers how to load documents from a Roam database. It takes a lot of inspiration from the example repo here.
🧑 Instructions for ingesting your own dataset#
Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.
When exporting, make sure to select the Markdown & CSV format option.
This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.
Run the following command to unzip the zip file (replace the Export... with your own file name as needed).
unzip Roam-Export-1675782732639.zip -d Roam_DB
from langchain.document_loaders import RoamLoader
loader = RoamLoader("Roam_DB")
docs = loader.load()
GCS File Storage#
This covers how to load document objects from a Google Cloud Storage (GCS) file object.
from langchain.document_loaders import GCSFileLoader
# !pip install google-cloud-storage
loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]
AZLyrics#
This covers how to load AZLyrics webpages into a document format that we can use downstream.
from langchain.document_loaders import AZLyricsLoader
loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")
data = loader.load()
data
[Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\n\nPaint my nails, cherry red\nMatch the roses that you left\nNo remorse, no regret\nI forgive every word you said\n\nI didn't wanna leave you, baby\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours, yeah\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI\n\nI didn't wanna wanna leave you\nI didn't wanna
fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours (Yeah)\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than\nYeah, I can love me better than you can, uh\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby (Than you can)\nCan love me better\nI can love me better, baby\nCan love me better\nI\n", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]
iFixit#
iFixit is the largest open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.
This loader will allow you to download the text of a repair guide, the text of Q&As, and wikis from devices on iFixit using their open APIs. It's incredibly useful for context related to technical documents and answers to questions about devices in the corpus of data on iFixit.
from langchain.document_loaders import IFixitLoader
loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
data = loader.load()
data
[Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the
banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]
loader = IFixitLoader("https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself")
data = loader.load()
data
[Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then
ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my
phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender
case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day
miraculously). I’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is
JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from
my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got
my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to
my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]
loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad")
data = loader.load()
data
[Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)]
Searching iFixit using /suggest#
If you're looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term. The loader will then load the content from each of the suggested items and prep and return the documents.
data = IFixitLoader.load_suggestions("Banana")
data
[Document(page_content='Banana\nTasty fruit. Good source of potassium. Yellow.\n== Background Information ==\n\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles
racing close behind you. Also used commonly as a synonym for “crazy” or “insane”.\n\nBotanically, the banana is considered a berry, although it isn’t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree’s ability to produce fruit year round.\n\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\n\n== Technical Specifications ==\n\n* Dimensions: Variable depending on genetics of the parent tree\n* Color: Variable depending on ripeness, region, and season\n\n== Additional Information ==\n\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0),
Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step
3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]
Azure Blob Storage Container#
This covers how to load document objects from a container on Azure Blob Storage.
from langchain.document_loaders import AzureBlobStorageContainerLoader
#!pip install azure-storage-blob
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
Specifying a prefix#
You can also specify a prefix for more fine-grained control over what files to load.
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>", prefix="<prefix>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
Utilities#
There are a lot of different utilities that LangChain provides integrations for. These guides go over how to use them.
These can largely be grouped into two categories: generic utilities, and then utilities for working with larger text documents.
Generic Utilities
Python REPL
SerpAPI
SearxNG Search
Utilities for working with Documents
Docstore
Text Splitter
Embeddings
VectorStores
Integrations#
Besides the installation of this python package, you will also need to install packages and set environment variables depending on which chains you want to use.
Note: these packages are not included in the default dependencies because, as we imagine scaling this package, we do not want to force users to install dependencies they do not need.
The following use cases require specific installs and API keys:
OpenAI:
Install requirements with pip install openai
Get an OpenAI API key and either set it as an environment variable (OPENAI_API_KEY) or pass it to the LLM constructor as openai_api_key (see the sketch after this list).
Cohere:
Install requirements with pip install cohere
Get a Cohere API key and either set it as an environment variable (COHERE_API_KEY) or pass it to the LLM constructor as cohere_api_key.
GooseAI:
Install requirements with pip install openai
Get a GooseAI API key and either set it as an environment variable (GOOSEAI_API_KEY) or pass it to the LLM constructor as gooseai_api_key.
Hugging Face Hub:
Install requirements with pip install huggingface_hub
Get a Hugging Face Hub API token and either set it as an environment variable (HUGGINGFACEHUB_API_TOKEN) or pass it to the LLM constructor as huggingfacehub_api_token.
Petals:
Install requirements with pip install petals
Get a Hugging Face API key and either set it as an environment variable (HUGGINGFACE_API_KEY) or pass it to the LLM constructor as huggingface_api_key.
CerebriumAI:
Install requirements with pip install cerebrium
Get a Cerebrium API key and either set it as an environment variable (CEREBRIUMAI_API_KEY) or pass it to the LLM constructor as cerebriumai_api_key.
PromptLayer:
Install requirements with pip install promptlayer (be sure to be on version 0.1.62 or higher)
Get an API key from promptlayer.com and set it using promptlayer.api_key=<API KEY>
SerpAPI:
Install requirements with pip install google-search-results
Get a SerpAPI API key and either set it as an environment variable (SERPAPI_API_KEY) or pass it to the LLM constructor as serpapi_api_key.
GoogleSearchAPI:
Install requirements with pip install google-api-python-client
Get a Google API key and either set it as an environment variable (GOOGLE_API_KEY) or pass it to the LLM constructor as google_api_key. You will also need to set the GOOGLE_CSE_ID environment variable to your custom search engine ID. You can pass it to the LLM constructor as google_cse_id as well.
WolframAlphaAPI:
Install requirements with pip install wolframalpha
Get a Wolfram Alpha API key and either set it as an environment variable (WOLFRAM_ALPHA_APPID) or pass it to the LLM constructor as wolfram_alpha_appid.
NatBot:
Install requirements with pip install playwright
Wikipedia:
Install requirements with pip install wikipedia
Elasticsearch:
Install requirements with pip install elasticsearch
Set up an Elasticsearch backend. If you want to run it locally, this is a good guide.
FAISS:
Install requirements with pip install faiss for Python 3.7 and pip install faiss-cpu for Python 3.10+.
Manifest:
Install requirements with pip install manifest-ml (Note: this is only available in Python 3.8+ currently).
OpenSearch:
Install requirements with pip install opensearch-py
If you want to set up OpenSearch locally, see here.
DeepLake:
Install requirements with pip install deeplake
LlamaCpp:
Install requirements with pip install llama-cpp-python
Download the model and convert it following the llama.cpp instructions
Milvus:
Install requirements with pip install pymilvus
To set up a local cluster, take a look here.
Zilliz:
Install requirements with pip install pymilvus
To get up and running, take a look here.
If you are using the NLTKTextSplitter or the SpacyTextSplitter, you will also need to install the appropriate models. For example, if you want to use the SpacyTextSplitter, you will need to install the en_core_web_sm model with python -m spacy download en_core_web_sm. Similarly, if you want to use the NLTKTextSplitter, you will need to install the punkt model with python -m nltk.downloader punkt.
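As a quick illustration of the two ways of supplying a key described in the OpenAI entry above (the same pattern applies to the other providers), here is a minimal sketch; the sk-... values are hypothetical placeholders:
import os
from langchain.llms import OpenAI

# Option 1: set the key as an environment variable (picked up automatically)
os.environ["OPENAI_API_KEY"] = "sk-..."  # hypothetical placeholder key
llm = OpenAI()

# Option 2: pass the key directly to the LLM constructor
llm = OpenAI(openai_api_key="sk-...")  # hypothetical placeholder key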
Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installable with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.
To install modules needed for the common LLM providers, run:
pip install langchain[llms]
To install all modules needed for all integrations, run:
pip install langchain[all]
Note that if you are using zsh, you’ll need to quote square brackets when passing them as an argument to a command, for example:
pip install 'langchain[all]'
Installing from source#
If you want to install from source, you can do so by cloning the repo and running:
pip install -e .
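Put together, a source install might look like this (a sketch, assuming the repository's GitHub location):
# clone the LangChain repository, then install it in editable mode
git clone https://github.com/hwchase17/langchain.git
cd langchain
pip install -e .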
VectorStores#
Wrappers on top of vector stores.
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic’s neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids.
refresh (bool) – Whether or not to refresh indices with the updated data.
Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) → Any[source]#
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your nomic API key.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your nomic API key.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AtlasDB
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
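Putting the AtlasDB methods above together, a minimal usage sketch; the project name and texts are illustrative, and a Nomic API key is assumed:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AtlasDB

embeddings = OpenAIEmbeddings()
# "my_project" is a hypothetical project name; api_key is your Nomic key
db = AtlasDB("my_project", embeddings.embed_query, api_key="NOMIC_API_KEY")

# add_texts returns the list of IDs of the added texts
ids = db.add_texts(["hello world", "goodbye world"])

# similarity_search returns the k most similar Documents
docs = db.similarity_search("greeting", k=2)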
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() → None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) – List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, filter: Optional[Dict[str, str]] = None) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param filter: Filter by metadata. Defaults to None.
:type filter: Optional[Dict[str, str]]
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, filter: Optional[Dict[str, str]] = None) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param filter: Filter by metadata. Defaults to None.
:type filter: Optional[Dict[str, str]]
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance as a float.
Return type
List[Tuple[Document, float]]
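A minimal end-to-end sketch of the Chroma methods documented above; the texts, metadata, and directory are illustrative:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
# With persist_directory set, the collection is written to disk;
# omit it for an ephemeral in-memory collection
db = Chroma.from_texts(
    texts=["foo", "bar", "baz"],
    embedding=embeddings,
    metadatas=[{"topic": "a"}, {"topic": "b"}, {"topic": "a"}],
    persist_directory="./chroma_db",
)

# Each result is a (Document, distance) tuple; filter restricts by metadata
results = db.similarity_search_with_score("foo", k=2, filter={"topic": "a"})

# Explicitly flush the collection to disk
db.persist()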
class langchain.vectorstores.DeepLake(dataset_path: str = 'mem://langchain', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, num_workers: int = 4)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
We implement naive similarity search and filtering for fast prototyping, but it can be extended with Tensor Query Language (TQL) for production use cases over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless, doesn’t require another service and can be used with major cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store. You can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete(ids: Any[List[str], None] = None, filter: Any[Dict[str, str], None] = None, delete_all: Any[bool, None] = None) → bool[source]#
Delete the entities in the dataset.
Parameters
ids (Optional[List[str]], optional) – The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional) – The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional) – Whether to drop the dataset.
Defaults to None.
delete_dataset() → None[source]#
Delete the collection.
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = 'mem://langchain', **kwargs: Any) → langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from raw documents.
If a dataset_path is specified, the dataset will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
dataset_path (str, pathlib.Path) –
The full path to the dataset. Can be:
A Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use ‘activeloop login’ from the command line).
An AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in the environment.
A Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required in the environment.
A local file system path of the form ./path/to/dataset, ~/path/to/dataset or path/to/dataset.
An in-memory path of the form mem://path/to/dataset, which doesn’t save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist.
texts (List[str]) – List of texts to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
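A minimal sketch of from_texts, assuming the deeplake python package is installed and an OpenAI API key is configured; the texts are hypothetical and the default mem:// path keeps the dataset in memory:
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings

# mem:// keeps the dataset in memory; pass e.g. "./deeplake_store" to persist it
db = DeepLake.from_texts(
    ["first document", "second document"],
    OpenAIEmbeddings(),
    dataset_path="mem://langchain",
)
docs = db.similarity_search("first", k=1)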
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
search(query: Any[str, None] = None, embedding: Any[float, None] = None, k: int = 4, distance_metric: str = 'L2', use_maximal_marginal_relevance: Optional[bool] = False, fetch_k: Optional[int] = 20, filter: Optional[Any[Dict[str, str], Callable, str]] = None, return_score: Optional[bool] = False, **kwargs: Any) → Any[List[Document], List[Tuple[Document, float]]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
embedding – Embedding function to use. Defaults to None.
k – Number of Documents to return. Defaults to 4.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for L-infinity
distance, cos for cosine similarity, ‘dot’ for dot product.
Defaults to L2.
filter – Attribute filter by metadata, e.g. {‘key’: ‘value’}. It can also
take a Deep Lake filter
(https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter).
Defaults to None.
maximal_marginal_relevance – Whether to use maximal marginal relevance.
Defaults to False.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
return_score – Whether to return the score. Defaults to False.
Returns
List of Documents selected by the specified distance metric;
if return_score is True, a list of (Document, score) tuples is returned.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
embedding – Embedding function to use.
Defaults to None.
k – Number of Documents to return.
Defaults to 4.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for
L-infinity distance, cos for cosine similarity, ‘dot’ for dot product.
Defaults to L2.
filter – Attribute filter by metadata example {‘key’: ‘value’}.
Defaults to None.
maximal_marginal_relevance – Whether to use maximal marginal relevance.
Defaults to False.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
return_score – Whether to return the score. Defaults to False.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, distance_metric: str = 'L2', k: int = 4, filter: Optional[Dict[str, str]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Deep Lake with distance returned.
Parameters
query (str) – Query text to search for.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for L-infinity
distance, cos for cosine similarity, ‘dot’ for dot product.
Defaults to L2.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance in float.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) – The URL for the Elasticsearch instance.
index_name (str) – The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) – An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises
ValueError – If the elasticsearch python package is not installed.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
refresh_indices – bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
class langchain.vectorstores.FAISS(embedding_function: Callable, index: Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings – Iterable pairs of string and embedding to
add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings, index_name: str = 'index') → langchain.vectorstores.faiss.FAISS[source]#
Load FAISS index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries
index_name – for loading a specific index file name
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
merge_from(target: langchain.vectorstores.faiss.FAISS) → None[source]#
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target – FAISS object you wish to merge into the current one
Returns
None.
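For example, a sketch of merging two small stores (the texts are hypothetical; assumes the faiss package and an OpenAI API key):
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)  # db1 now contains documents from both stores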
save_local(folder_path: str, index_name: str = 'index') → None[source]#
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
index_name – for saving with a specific index file name
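A sketch of a save/load round trip using save_local and load_local together (the folder name is arbitrary; assumes the faiss package and an OpenAI API key):
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(["alpha", "beta"], embeddings)
db.save_local("faiss_index")  # writes index, docstore and index_to_docstore_id
restored = FAISS.load_local("faiss_index", embeddings)
docs = restored.similarity_search("alpha", k=1)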
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, connection_args: dict, collection_name: str, text_field: str)[source]#
Wrapper around the Milvus vector database.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, partition_name: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[str][source]#
Insert text data into Milvus.
When using add_texts() it is assumed that a collection has already
been made and indexed. If metadata is included, it is assumed that
it is ordered correctly to match the schema provided to the Collection
and that the embedding vector is the first schema field.
Parameters
texts (Iterable[str]) – The text being embedded and inserted.
metadatas (Optional[List[dict]], optional) – The metadata that
corresponds to each insert. Defaults to None.
partition_name (str, optional) – The partition of the collection
to insert data into. Defaults to None.
timeout – specified timeout.
Returns
The resulting keys for each inserted element.
Return type
List[str]
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.milvus.Milvus[source]#
Creates a Milvus collection, indexes it with HNSW, and inserts data.
Parameters
texts (List[str]) – Text to insert.
embedding (Embeddings) – Embedding function to use.
metadatas (Optional[List[dict]], optional) – Dict metadata.
Defaults to None.
Returns
The Milvus vector store.
Return type
VectorStore
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
query (str) – The text being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – What partitions to search.
Defaults to None.
round_decimal (int, optional) – Round the resulting distance. Defaults
to -1.
timeout (int, optional) – Amount to wait before timeout error. Defaults
to None.
Returns
Document results for search.
Return type
List[Document]
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
query (str) – The text to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – What partitions to search.
Defaults to None.
round_decimal (int, optional) – What decimal point to round to.
Defaults to -1.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results.
Parameters
query (str) – The text being searched.
k (int, optional) – The amount of results to return. Defaults to 4.
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – Partitions to search through.
Defaults to None.
round_decimal (int, optional) – Round the resulting distance. Defaults to -1.
timeout (int, optional) – Amount to wait before timeout error. Defaults
to None.
kwargs – Collection.search() keyword arguments.
Returns
search_embedding, (Document, distance, primary_field) results.
Return type
List[float], List[Tuple[Document, any, any]]
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings, **kwargs: Any)[source]#
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
bulk_size – Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
“vector_field”.
text_field: Document field the text of the document is stored in. Defaults
to “text”.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by the nmslib, faiss
and lucene engines, which is recommended for large datasets. It also supports
brute force search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
“vector_field”.
text_field: Document field the text of the document is stored in. Defaults
to “text”.
Optional Keyword Args for Approximate Search:
engine: “nmslib”, “faiss”, “hnsw”; default: “nmslib”
space_type: “l2”, “l1”, “cosinesimil”, “linf”, “innerproduct”; default: “l2”
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
“vector_field”.
text_field: Document field the text of the document is stored in. Defaults
to “text”.
metadata_field: Document field that metadata is stored in. Defaults to
“metadata”.
Can be set to a special value “*” to include the entire document.
Optional Args for Approximate Search:
search_type: “approximate_search”; default: “approximate_search”
size: number of results the query actually returns; default: 4
Optional Args for Script Scoring Search:
search_type: “script_scoring”; default: “approximate_search”
space_type: “l2”, “l1”, “linf”, “cosinesimil”, “innerproduct”,
“hammingbit”; default: “l2”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
Optional Args for Painless Scripting Search:
search_type: “painless_scripting”; default: “approximate_search”
space_type: “l2Squared”, “l1Norm”, “cosineSimilarity”; default: “l2Squared”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
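For instance, reusing the opensearch_vector_search instance from the example above, a Script Scoring query could be sketched as follows (the query string and parameter values are illustrative, drawn from the optional args just listed):
# brute-force cosine similarity with a pre-filter, per the optional args above
docs = opensearch_vector_search.similarity_search(
    "what is a vector database?",
    k=4,
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"match_all": {}},
)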
class langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
namespace – Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) → langchain.vectorstores.pinecone.Pinecone[source]#
Load pinecone vectorstore from index name.
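A sketch of loading an existing index (the index name and environment are placeholders; assumes the pinecone-client package is installed and an OpenAI API key is configured):
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone

pinecone.init(api_key="***", environment="...")  # values from your Pinecone console
vectorstore = Pinecone.from_existing_index("langchain-demo", OpenAIEmbeddings())
docs = vectorstore.similarity_search("query", k=4)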
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) → langchain.vectorstores.pinecone.Pinecone[source]#
Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return pinecone documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return pinecone documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Qdrant(client: Any, collection_name: str, embedding_function: Callable, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata')[source]#
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
CONTENT_KEY = 'page_content'#
METADATA_KEY = 'metadata'#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.qdrant.Qdrant[source]#
Construct Qdrant wrapper from a list of texts.
Parameters
texts – A list of texts to be indexed in Qdrant.
embedding – A subclass of Embeddings, responsible for text vectorization.
metadatas – An optional list of metadata. If provided it has to be of the same
length as a list of texts.
location – If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url – either host or str of “Optional[scheme], host, Optional[port],
Optional[prefix]”. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port – Port of the gRPC interface. Default: 6334
prefer_grpc – If true - use gRPC interface whenever possible in custom methods.
Default: False
https – If true - use HTTPS(SSL) protocol. Default: None
api_key – API key for authentication in Qdrant Cloud. Default: None
prefix – If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout – Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host – Host name of Qdrant service. If url and host are None, set to
‘localhost’. Default: None
path – Path in which the vectors will be stored while using local mode.
Default: None
collection_name – Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func – Distance function. One of: “Cosine” / “Euclid” / “Dot”.
Default: “Cosine”
content_payload_key – A payload key used to store the content of the document.
Default: “page_content”
metadata_payload_key – A payload key used to store the metadata of the document.
Default: “metadata”
**kwargs – Additional arguments passed directly into REST client initialization
This is a user friendly interface that:
Creates embeddings, one for each text
Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool]]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool]]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each.
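Continuing the qdrant instance from the from_texts example above, a sketch of retrieving scored results with an optional metadata filter (the filter key is hypothetical):
docs_and_scores = qdrant.similarity_search_with_score(
    "What is Qdrant?",
    k=2,
    filter={"source": "docs"},  # hypothetical metadata key
)
for doc, score in docs_and_scores:
    print(score, doc.page_content)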
class langchain.vectorstores.VectorStore[source]#
Interface for vector stores.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → langchain.schema.BaseRetriever[source]#
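as_retriever wraps any concrete vector store in the standard retriever interface. A minimal sketch, where db stands for any store from this page (e.g. a FAISS instance) and search_kwargs is assumed to be passed through to the underlying search:
# db is any concrete VectorStore, e.g. FAISS.from_texts(texts, embeddings)
retriever = db.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents("my query")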
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
class langchain.vectorstores.Weaviate(client: Any, index_name: str, text_key: str, attributes: Optional[List[str]] = None)[source]#
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Upload texts with metadata (properties) to Weaviate.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.weaviate.Weaviate[source]#
Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Weaviate instance.
Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Look up similar documents by embedding vector in Weaviate.
Python REPL#
Mock Python REPL.
pydantic model langchain.python.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) → str[source]#
Run command with own globals/locals and returns anything printed.
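A minimal sketch of run(); note that only what the command prints is captured:
from langchain.python import PythonREPL

repl = PythonREPL()
output = repl.run("x = 2 ** 10\nprint(x)")
print(output)  # "1024\n" - only printed output is returned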
Example Selector#
Logic for selecting examples to include in prompts.
pydantic model langchain.prompts.example_selector.LengthBasedExampleSelector[source]#
Select examples based on length.
Validators
calculate_example_text_lengths » example_text_lengths
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
Prompt template used to format the examples.
field examples: List[dict] [Required]#
A list of the examples that the prompt template expects.
field get_text_length: Callable[[str], int] = <function _get_length_based>#
Function to measure prompt length. Defaults to word count.
field max_length: int = 2048#
Max length for the prompt, beyond which examples are cut.
add_example(example: Dict[str, str]) → None[source]#
Add new example to list.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on the input lengths.
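A sketch of wiring the fields above together (the examples and template are hypothetical):
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,  # measured in words by default
)
selected = selector.select_examples({"input": "big"})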
pydantic model langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector[source]#
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
field fetch_k: int = 20#
Number of examples to fetch to rerank.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) → langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
pydantic model langchain.prompts.example_selector.SemanticSimilarityExampleSelector[source]#
Example selector that selects examples based on SemanticSimilarity.
field example_keys: Optional[List[str]] = None#
Optional keys to filter examples to.
field input_keys: Optional[List[str]] = None#
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
field k: int = 4#
Number of examples to select.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
VectorStore that contains information about examples.
add_example(example: Dict[str, str]) → str[source]#
Add new example to vectorstore.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) → langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
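A sketch of from_examples backed by FAISS (the examples are hypothetical; assumes the faiss package and an OpenAI API key):
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=1
)
selected = selector.select_examples({"input": "enthusiastic"})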
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
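A sketch of split_text with the default paragraph separator (the text and sizes are illustrative):
from langchain.text_splitter import CharacterTextSplitter

text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=30, chunk_overlap=0)
chunks = splitter.split_text(text)  # roughly one chunk per paragraph here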
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Latex-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax.
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using Spacy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>)[source]#
Interface for splitting text into chunks.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.schema.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: List[langchain.schema.Document]) → List[langchain.schema.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
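A sketch combining from_tiktoken_encoder with create_documents (assumes the tiktoken package is installed; the input text and metadata are placeholders):
from langchain.text_splitter import CharacterTextSplitter

# count chunk length in gpt2 tokens rather than characters
splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="gpt2", chunk_size=100, chunk_overlap=0
)
docs = splitter.create_documents(
    ["a long text to be split ..."],
    metadatas=[{"source": "example"}],
)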
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
SearxNG Search#
Utility for using SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
multiple search engines and databases and
supports the OpenSearch
specification.
More details on the installation instructions are available here.
For the search API refer to https://docs.searxng.org/dev/search_api.html
Quick Start#
In order to use this utility you need to provide the searx host. This can be done
by passing the named parameter searx_host
or exporting the environment variable SEARX_HOST.
Note: this is the only required parameter.
Then create a searx search instance like this:
from langchain.utilities import SearxSearchWrapper
# when the host starts with `http` SSL is disabled and the connection
# is assumed to be on a private network
searx_host='http://self.hosted'
search = SearxSearchWrapper(searx_host=searx_host)
You can now use the search instance to query the searx API.
Searching#
Use the run() and
results() methods to query the searx API.
Other methods are available for convenience.
SearxResults is a convenience wrapper around the raw json result.
Example usage of the run method to make a search:
s.run(query="what is the best search engine?")
Engine Parameters#
You can pass any accepted searx search API parameters to the
SearxSearchWrapper instance.
In the following example we are using the
engines and the language parameters:
# assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips#
Searx offers a special
search syntax
that can also be used instead of passing engine parameters.
For example the following query:
s = SearxSearchWrapper("langchain library", engines=['github'])
# can also be written as:
s = SearxSearchWrapper("langchain library !github")
# or even:
s = SearxSearchWrapper("langchain library !gh")
In some situations you might want to pass an extra string to the search query.
For example when the run() method is called by an agent. The search suffix can
also be used as a way to pass extra parameters to searx or the underlying search
engines.
# select the github engine and pass the search suffix
s = SearxSearchWrapper("langchain library", query_suffix="!gh")
s = SearxSearchWrapper("langchain library")
# select github using the conventional google search syntax
s.run("large language models", query_suffix="site:github.com")
NOTE: A search suffix can be defined on both the instance and the method level.
The resulting query will be the concatenation of the two with the former taking
precedence.
See SearxNG Configured Engines and
SearxNG Search Syntax
for more details.
Notes
This wrapper is based on the SearxNG fork searxng/searxng which is
better maintained than the original Searx project and offers more features.
Public searxNG instances often use a rate limiter for API usage, so you might want to
use a self hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described here.
For a list of public SearxNG instances see https://searx.space/
class langchain.utilities.searx_search.SearxResults(data: str)[source]#
Dict like wrapper around search api results.
property answers: Any#
Helper accessor on the json result.
pydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field aiosession: Optional[Any] = None#
field categories: Optional[List[str]] = []#
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Asynchronously query with json results.
Uses aiohttp. See results for more info.
async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Asynchronous version of run.
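A minimal sketch of the async variant, assuming a SearxNG instance at localhost:8888:
import asyncio
from langchain.utilities import SearxSearchWrapper

async def main():
    s = SearxSearchWrapper(searx_host="http://localhost:8888")
    # arun mirrors run but awaits the HTTP call via aiohttp
    answer = await s.arun("what is a large language model?")
    print(answer)

asyncio.run(main())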
results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and returns the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
A list of dicts, each with the following keys:
snippet – The description of the result.
title – The title of the result.
link – The link to the result.
engines – The engines used for the result.
category – Searx category of the result.
Return type
List[Dict]
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
The result of the query.
Return type
str
Raises
ValueError – If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France?", engines=["qwant"])
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France?", query_suffix="!qwant")
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in-memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) → None[source]#
Add texts to in memory dictionary.
search(search: str) → Union[str, langchain.schema.Document][source]#
Search via direct lookup.
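A minimal sketch of the in-memory docstore:
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document

store = InMemoryDocstore({"1": Document(page_content="hello world")})
store.add({"2": Document(page_content="goodbye world")})
print(store.search("1").page_content)  # direct lookup by key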
class langchain.docstore.Wikipedia[source]#
Wrapper around the Wikipedia API.
search(search: str) → Union[str, langchain.schema.Document][source]#
Try to search for a wiki page.
If the page exists, return the page summary and a PageWithLookups object.
If the page does not exist, return similar entries.
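A short usage sketch (this assumes the wikipedia Python package is installed):
from langchain.docstore import Wikipedia

docstore = Wikipedia()
result = docstore.search("Python (programming language)")
# a Document with the page summary, or a string listing similar entries
print(result)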
Chains#
Chains are easily reusable components which can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_api_answer_prompt » all fields
validate_api_request_prompt » all fields
field api_answer_chain: LLMChain [Required]#
field api_docs: str [Required]#
field api_request_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Required]#
classmethod from_llm_and_api_docs(llm: langchain.schema.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.api.base.APIChain[source]#
Load chain from just an LLM and the api docs.
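A brief sketch of from_llm_and_api_docs, using the Open-Meteo API docs bundled with langchain (this assumes an OpenAI key is configured):
from langchain import OpenAI
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs

chain = APIChain.from_llm_and_api_docs(OpenAI(temperature=0), open_meteo_docs.OPEN_METEO_DOCS)
chain.run("What is the weather like right now in Munich, Germany, in degrees Celsius?")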
pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits a document, then analyzes it in pieces.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#
field text_splitter: langchain.text_splitter.TextSplitter [Optional]#
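A minimal sketch pairing it with a summarization chain over one long text (long_text is an assumed pre-loaded string):
from langchain import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain

summary_chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
analyze_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
analyze_chain.run(long_text)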
pydantic model langchain.chains.ChatVectorDBChain[source]#
Chain for chatting with a vector database.
Validators
raise_deprecation » all fields
set_callback_manager » callback_manager
set_verbose » verbose
field search_kwargs: dict [Optional]#
field top_k_docs_for_context: int = 4#
field vectorstore: VectorStore [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), qa_prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, chain_type: str = 'stuff', **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
pydantic model langchain.chains.ConstitutionalChain[source]#
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.chains import LLMChain, ConstitutionalChain
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=OpenAI(), prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the meaning of life?")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field chain: langchain.chains.llm.LLMChain [Required]#
field constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]#
field critique_chain: langchain.chains.llm.LLMChain [Required]#
field revision_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'],
output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.',
'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical
orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'],
output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix='Below is conservation between a human and an AI model.', template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could
harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a
massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do
not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is conservation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.constitutional_ai.base.ConstitutionalChain[source]#
Create a chain from an LLM.
classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.ConversationChain[source]#
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_prompt_input_variables » all fields
field memory: langchain.schema.BaseMemory [Optional]#
Default memory store.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)#
Default conversation prompt to use.
property input_keys: List[str]#
Use this since some prompt vars come from history.
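A small sketch showing an explicit memory store (ConversationBufferMemory is one of the available memory classes):
from langchain import ConversationChain, OpenAI
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(llm=OpenAI(), memory=ConversationBufferMemory())
conversation.predict(input="Hi, my name is Ada.")
conversation.predict(input="What is my name?")  # answered from the buffered history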
pydantic model langchain.chains.ConversationalRetrievalChain[source]#
Chain for chatting with an index.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field max_tokens_limit: Optional[int] = None#
If set, restricts the docs to return from store based on tokens, enforced only
for StuffDocumentChain
field retriever: BaseRetriever [Required]#
Index to connect to.
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history',
'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), qa_prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, chain_type: str = 'stuff', **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
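A minimal sketch, assuming vectorstore is an existing vector store:
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(OpenAI(), retriever=vectorstore.as_retriever())
result = qa({"question": "What topics does this corpus cover?", "chat_history": []})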
pydantic model langchain.chains.GraphQAChain[source]#
Chain for question-answering against a graph.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field entity_extraction_chain: LLMChain [Required]#
field graph: NetworkxEntityGraph [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.llms.base.BaseLLM, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), entity_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template="Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nReturn the output as a single comma-separated list,
or NONE if there is nothing of note to return.\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\nOutput: Langchain, Sam\nEND OF EXAMPLE\n\nBegin!\n\n{input}\nOutput:", template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.graph_qa.base.GraphQAChain[source]#
Initialize from LLM.
pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]#
Generate hypothetical document for query, and then embed that.
Based on https://arxiv.org/abs/2212.10496
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field base_embeddings: Embeddings [Required]#
field llm_chain: LLMChain [Required]#
combine_embeddings(embeddings: List[List[float]]) → List[float][source]#
Combine embeddings into final embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call the base embeddings.
embed_query(text: str) → List[float][source]#
Generate a hypothetical document and embed it.
classmethod from_llm(llm: langchain.llms.base.BaseLLM, base_embeddings: langchain.embeddings.base.Embeddings, prompt_key: str) → langchain.chains.hyde.base.HypotheticalDocumentEmbedder[source]#
Load and use LLMChain for a specific prompt key.
property input_keys: List[str]#
Input keys for Hyde’s LLM chain.
property output_keys: List[str]#
Output keys for Hyde’s LLM chain.
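A short sketch using the web_search prompt key (an assumption; other keys exist in the HyDE prompt map):
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import HypotheticalDocumentEmbedder

hyde = HypotheticalDocumentEmbedder.from_llm(OpenAI(), OpenAIEmbeddings(), prompt_key="web_search")
vector = hyde.embed_query("Where is the Eiffel Tower?")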
pydantic model langchain.chains.LLMBashChain[source]#
Chain that interprets a prompt and executes bash code to perform bash operations.
Example
from langchain import LLMBashChain, OpenAI
llm_bash = LLMBashChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field llm: langchain.schema.BaseLanguageModel [Required]#
LLM wrapper to use.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True)#
pydantic model langchain.chains.LLMChain[source]#
Chain to run queries against LLMs.
Example
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
input_variables=["adjective"], template=prompt_template
)
llm = LLMChain(llm=OpenAI(), prompt=prompt)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field llm: BaseLanguageModel [Required]#
field prompt: BasePromptTemplate [Required]#
Prompt object to use.
async aapply(input_list: List[Dict[str, Any]]) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]]) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async agenerate(input_list: List[Dict[str, Any]]) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]]) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]]) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async apredict(**kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(**kwargs: Any) → Union[str, List[str], Dict[str, str]][source]#
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]]) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
create_outputs(response: langchain.schema.LLMResult) → List[Dict[str, str]][source]#
Create outputs from response.
classmethod from_string(llm: langchain.schema.BaseLanguageModel, template: str) → langchain.chains.base.Chain[source]#
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]]) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
predict(**kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(**kwargs: Any) → Union[str, List[str], Dict[str, str]][source]#
Call predict and then parse the results.
prep_prompts(input_list: List[Dict[str, Any]]) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
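A brief sketch of batch usage via apply next to single-shot predict (assuming an OpenAI key is configured):
from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke")
chain = LLMChain(llm=OpenAI(), prompt=prompt)
outputs = chain.apply([{"adjective": "funny"}, {"adjective": "dry"}])  # one batched generate call
completion = chain.predict(adjective="funny")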
pydantic model langchain.chains.LLMCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMCheckerChain
llm = OpenAI(temperature=0.7)
checker_chain = LLMCheckerChain(llm=llm)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True)#
field create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'],
output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True)#
field list_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True)#
field llm: langchain.llms.base.BaseLLM [Required]#
LLM wrapper to use.
field revised_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True)#
Prompt to use when questioning the documents.
pydantic model langchain.chains.LLMMathChain[source]#
Chain that interprets a prompt and executes python code to do math.
Example
from langchain import LLMMathChain, OpenAI
llm_math = LLMMathChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field llm: langchain.schema.BaseLanguageModel [Required]#
LLM wrapper to use.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into Python code that can be executed in Python 3 REPL. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```python\n${{Code that solves the
problem and prints the solution}}\n```\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```python\nprint(37593 * 67)\n```\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)#
Prompt to use to translate to Python if necessary.
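Continuing the example above, a typical call looks like:
llm_math = LLMMathChain(llm=OpenAI(temperature=0))
llm_math.run("What is 13 raised to the 0.3432 power?")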
pydantic model langchain.chains.LLMRequestsChain[source]#
Chain that hits a URL and then uses an LLM to parse results.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field llm_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Optional]#
field text_length: int = 8000#
pydantic model langchain.chains.LLMSummarizationCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain(llm=llm)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked
Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)#
field check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)#
field create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)#
field llm: langchain.llms.base.BaseLLM [Required]#
LLM wrapper to use.
field max_checks: int = 2#
Maximum number of times to check the assertions. Default to double-checking.
field revised_summary_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)#
pydantic model langchain.chains.MapReduceChain[source]#
Map-reduce chain.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_documents_chain: BaseCombineDocumentsChain [Required]#
Chain to use to combine documents.
field text_splitter: TextSplitter [Required]#
Text splitter to use.
classmethod from_params(llm: langchain.llms.base.BaseLLM, prompt: langchain.prompts.base.BasePromptTemplate, text_splitter: langchain.text_splitter.TextSplitter) → langchain.chains.mapreduce.MapReduceChain[source]#
Construct a map-reduce chain that uses the chain for map and reduce.
pydantic model langchain.chains.OpenAIModerationChain[source]#
Pass input through a moderation endpoint.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chains import OpenAIModerationChain
moderation = OpenAIModerationChain()
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field error: bool = False#
Whether or not to error if bad content was found.
field model_name: Optional[str] = None#
Moderation model name to use.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
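A short usage sketch (requires OPENAI_API_KEY as noted above):
moderation = OpenAIModerationChain()
moderation.run("some text to check")
# with error=True, flagged content raises a ValueError instead of returning a message
strict_moderation = OpenAIModerationChain(error=True)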
pydantic model langchain.chains.OpenAPIEndpointChain[source]#
Chain that interacts with an OpenAPI endpoint using natural language.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field api_operation: APIOperation [Required]#
field api_request_chain: LLMChain [Required]#
field api_response_chain: Optional[LLMChain] = None#
field param_mapping: _ParamMapping [Required]#
field requests: Requests [Optional]#
field return_intermediate_steps: bool = False#
deserialize_json_input(serialized_args: str) → dict[source]#
Use the serialized typescript dictionary.
Resolve the path, query params dict, and optional requestBody dict.
classmethod from_api_operation(operation: langchain.tools.openapi.utils.api_models.APIOperation, llm: langchain.llms.base.BaseLLM, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, **kwargs: Any) → OpenAPIEndpointChain[source]#
Create an OpenAPIEndpointChain from an operation and a spec.
classmethod from_url_and_method(spec_url: str, path: str, method: str, llm: langchain.llms.base.BaseLLM,
requests: Optional[langchain.requests.Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) → OpenAPIEndpointChain[source]#
Create an OpenAPIEndpoint from a spec at the specified url.
pydantic model langchain.chains.PALChain[source]#
Implements Program-Aided Language Models.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field get_answer_expr: str = 'print(solution())'#
field llm: BaseLanguageModel [Required]#
field prompt: BasePromptTemplate [Required]#
field python_globals: Optional[Dict[str, Any]] = None#
field python_locals: Optional[Dict[str, Any]] = None#
field return_intermediate_steps: bool = False#
field stop: str = '\n\n'#
classmethod from_colored_object_prompt(llm: langchain.schema.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]#
Load PAL from colored object prompt.
classmethod from_math_prompt(llm: langchain.schema.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]#
Load PAL from math prompt.
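A minimal sketch of the math variant:
from langchain import OpenAI
from langchain.chains import PALChain

pal_chain = PALChain.from_math_prompt(OpenAI(temperature=0))
pal_chain.run("Olivia has $23. She bought five bagels for $3 each. How much money does she have left?")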
pydantic model langchain.chains.QAGenerationChain[source]#
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field input_key: str = 'text'#
field k: Optional[int] = None#
field llm_chain: LLMChain [Required]#
field output_key: str = 'questions'#
field text_splitter: TextSplitter = <langchain.text_splitter.RecursiveCharacterTextSplitter object>#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.chains.qa_generation.base.QAGenerationChain[source]#
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
pydantic model langchain.chains.QAWithSourcesChain[source]#
Question answering with sources over documents.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_naming » all fields
pydantic model langchain.chains.RetrievalQA[source]#
Chain for question-answering against an index.
Example
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
vectordb = FAISS(...)
retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=vectordb.as_retriever())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field retriever: BaseRetriever [Required]#
pydantic model langchain.chains.RetrievalQAWithSourcesChain[source]#
Question-answering with sources over an index.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_naming » all fields
field max_tokens_limit: int = 3375#
Restrict the docs to return from store based on tokens,
enforced only for StuffDocumentChain and only if reduce_k_below_max_tokens is set to true
field reduce_k_below_max_tokens: bool = False#
Reduce the number of results to return from store based on tokens limit
field retriever: langchain.schema.BaseRetriever [Required]#
Index to connect to.
pydantic model langchain.chains.SQLDatabaseChain[source]#
Chain for interacting with SQL Database.
Example
from langchain import SQLDatabaseChain, OpenAI, SQLDatabase
db = SQLDatabase(...)
db_chain = SQLDatabaseChain(llm=OpenAI(), database=db)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field database: SQLDatabase [Required]#
SQL Database to connect to.
field llm: BaseLanguageModel [Required]#
LLM wrapper to use.
field prompt: Optional[BasePromptTemplate] = None#
Prompt to use to translate natural language to SQL.
field return_direct: bool = False#
Whether or not to return the result of querying the SQL table directly.
field return_intermediate_steps: bool = False#
Whether or not to return the intermediate steps along with the final answer.
field top_k: int = 5#
Number of results to return from the query.
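Continuing the example above, a typical query (from_uri is one way to build the SQLDatabase; treat the path as an assumption):
db = SQLDatabase.from_uri("sqlite:///example.db")
db_chain = SQLDatabaseChain(llm=OpenAI(temperature=0), database=db)
db_chain.run("How many employees are there?")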
pydantic model langchain.chains.SQLDatabaseSequentialChain[source]#
Chain for querying SQL database that is a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
This is useful in cases where the number of tables in the database is large.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field decider_chain: LLMChain [Required]#
field return_intermediate_steps: bool = False#
field sql_chain: SQLDatabaseChain [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain,
always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: "Question here"\nSQLQuery: "SQL Query to run"\nSQLResult: "Result of the SQLQuery"\nAnswer: "Final answer here"\n\nOnly use the tables listed below.\n\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]#
Load the necessary chains.
pydantic model langchain.chains.SequentialChain[source]#
Chain where the outputs of one chain feed directly into next.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_chains » all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field input_variables: List[str] [Required]#
field return_all: bool = False#
pydantic model langchain.chains.SimpleSequentialChain[source]#
Simple chain where the outputs of one step feed directly into next.