Dataset columns: id (string, 14-16 chars), text (string, 29-2.31k chars), source (string, 57-122 chars).
7eb89a485143-541
\r\n Trapp gives Ron a Bear Hug.\r\n \r\n SGT. TRAPP (CONT\'D)\r\n You did good.\r\n \r\n RON STALLWORTH\r\n Sarge. We did good.\r\n \r\n Ron and Flip eyes meet, bonded.\r\n \r\n SGT. TRAPP\r\n Chief wants to see you Guys.\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-542
Chief wants to see you Guys.\r\n \r\n Flip nudges Ron.\r\n \r\n FLIP\r\n Hey... early promotion?\r\n \r\n Ron smiles.\r\n \r\n INT. OFFICE OF THE CHIEF OF POLICE - DAY\r\n \r\n Ron, Flip, and Sgt. Trapp sit opposite Chief Bridges.\r\n \r\n CHIEF BRIDGES\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-543
CHIEF BRIDGES\r\n Again, I can\'t commend you enough for\r\n what you\'ve achieved. You know there\r\n was not a Single Cross Burning the\r\n entire time you were involved?\r\n \r\n RON STALLWORTH\r\n I\'m aware.\r\n \r\n CHIEF BRIDGES\r\n But all good things must come to an\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-544
end...\r\n \r\n Sgt. Trapp shakes his head, resigned.\r\n RON STALLWORTH\r\n What does that mean?\r\n \r\n Ron and Flip look at each other, stunned.\r\n \r\n CHIEF BRIDGES\r\n Budget Cuts.\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-545
FLIP\r\n Budget Cuts?\r\n \r\n CHIEF BRIDGES\r\n Inflation... I wish I had a choice.\r\n My hands are tied. Besides, it looks\r\n like there are no longer any tangible\r\n Threats...\r\n \r\n RON STALLWORTH\r\n ...Sounds like we did too good a job.\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-546
\r\n CHIEF BRIDGES\r\n Not a Bad Legacy to leave.\r\n \r\n Bridges takes a deliberate pause. Then, THE Sucker Punch...\r\n \r\n CHIEF BRIDGES (CONT\'D)\r\n And I need you, Ron Stallworth, to\r\n destroy all Evidence of this\r\n Investigation.\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-547
RON STALLWORTH\r\n Excuse me?\r\n \r\n FLIP\r\n This is total Horseshit.\r\n \r\n CHIEF BRIDGES\r\n We prefer that The Public never knew\r\n about this Investigation.\r\n \r\n Ron and Flip are heated. Sgt. Trapp is silent but gutted.\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-548
\r\n RON STALLWORTH\r\n If they found out...\r\n \r\n CHIEF BRIDGES\r\n ...Cease all further contact with The\r\n Ku Klux Klan. Effective immediately.\r\n That goes for Flip too. Ron\r\n Stallworth...\r\n \r\n RON STALLWORTH\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-549
RON STALLWORTH\r\n This is some Fucked up Bullshit.\r\n CHIEF BRIDGES\r\n Take a week off. Go on vacation with\r\n your Girlfriend. We\'ll hold down The\r\n Fort until you get back. Get you\r\n another assignment...Narcotics.\r\n \r\n Ron storms out.\r\n \r\n INT. INTELLIGENCE UNIT - CSPD - DAY\r\n \r\n Ron reflects as he feeds Investigation documents in a\r\n Shredder. The documents shred into pieces. Just then, the\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-550
Shredder. The documents shred into pieces. Just then, the\r\n Undercover Phone Line rings on Ron\'s desk.\r\n \r\n Ron stares at the Phone, still ringing. He looks at The\r\n Documents in his hand, about to feed them into The Shredder.\r\n Ron stops. Throws The Documents in a Folder. Sweeps some\r\n Folders into his Briefcase. Leaves as The Phone still rings.\r\n \r\n EXT. COLORADO SPRINGS POLICE DEPARTMENT BUILDING - DAY\r\n \r\n Ron is walking fast now, trying to make it out of The\r\n Building with The Evidence but he remembers something.\r\n He stops, turns back.\r\n \r\n INT. INTELLIGENCE DIVISION - CSPD - DAY\r\n \r\n Ron sits at his Desk, on
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-551
\r\n Ron sits at his Desk, on The Undercover Phone Line. Flip,\r\n Jimmy and Sgt. Trapp are behind, both close enough to listen,\r\n giggling.\r\n \r\n RON STALLWORTH\r\n I\'m sorry we didn\'t get to spend more\r\n One-on-One time together.\r\n \r\n INT. DEVIN DAVIS OFFICE - DAY\r\n \r\n INTERCUT RON, FLIP, AND TRAPP WITH DEVIN DAVIS:\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-552
DEVIN DAVIS\r\n Well, that tragic event. I had just\r\n met those Fine Brothers in the cause.\r\n \r\n RON STALLWORTH\r\n Our Chapter is just shaken to the\r\n core. And poor Connie not only does\r\n she lose her Husband but she\'s facing\r\n a healthy Prison Sentence.\r\n \r\n DEVIN DAVIS\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-553
My God. And then there was that one\r\n Nigger Detective who threatened me.\r\n RON STALLWORTH\r\n Goddamn Coloreds sure know how to\r\n spoil a Celebration.\r\n \r\n Flip and Jimmy snort. Ron holds in a Belly-Laugh.\r\n \r\n DEVIN DAVIS\r\n Christ. You can say that again.\r\n \r\n Ron cracks up into his Hand. Sgt. Trapp is wheezing--
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-554
Ron cracks up into his Hand. Sgt. Trapp is wheezing-- his\r\n Face Bright Pink. Flip is laughing hard in the background.\r\n \r\n RON STALLWORTH\r\n Can I ask you something? That Nigger\r\n Detective who gave you a hard time?\r\n Ever get his name?\r\n \r\n DEVIN DAVIS\r\n No, I...\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-555
RON STALLWORTH\r\n ...Are-uh you sure you don\'t know who\r\n he is? Are-uh you absolutely sure?\r\n \r\n Davis looks at his Phone. Ron takes out his SMALL NOTE PAD\r\n out revealing a list of Racial epitaphs he had written down\r\n being on this Investigation. He reads from it to Davis on the\r\n phone.\r\n \r\n ANGLE - SPLIT SCREEN\r\n \r\n Ron Stallworth and Devin Davis.\r\n \r\n RON STALLWORTH (CONT\'D)\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-556
Cuz\' dat Niggah Coon, Gator Bait,\r\n Spade, Spook, Sambo, Spear Flippin\',\r\n Jungle Bunny, Mississippi Wind\r\n Chime...Detective is Ron Stallworth\r\n you Redneck, Racist Peckerwood Small\r\n Dick Motherfucker!!!\r\n \r\n CLICK. Ron SLAM DUNKS THE RECEIVER LIKE SHAQ.\r\n \r\n CLOSE - DEVIN DAVIS\r\n \r\n Devin Davis\'s Jaw Drops.\r\n \r\n INT. INTELLIGENCE DIVISION - CSPD - DAY\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-557
INT. INTELLIGENCE DIVISION - CSPD - DAY\r\n \r\n THE WHOLE OFFICE EXPLODES IN LAUGHTER. COPS ARE ROLLING ON\r\n THE OFFICE FLOOR.\r\n INT. RON\'S APARTMENT - KITCHEN - NIGHT\r\n \r\n Folders of Evidence sit on The Kitchen Table in a stack in\r\n front of Ron. He sips his Lipton Tea and removes from the\r\n FILES THE\r\n \r\n CLOSE - POLAROID\r\n Ron hugged up, between Devin Davis and Jesse Nayyar. He then\r\n looks at The Klan Membership Card shifting in his hands, his\r\n gaze fixated on the words.\r\n \r\n CLOSE - Ron Stallworth\r\n KKK Member in Good Standing\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-558
KKK Member in Good Standing\r\n \r\n Patrice comes up from behind.\r\n CLOSE - PATRICE\r\n She pulls out a small handgun from her pocketbook.\r\n \r\n 2 - SHOT - PATRICE AND RON\r\n \r\n PATRICE (O.S.)\r\n Have you Resigned from The KKK?\r\n \r\n RON STALLWORTH\r\n Affirmative.\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-559
\r\n PATRICE\r\n Have you handed in your Resignation\r\n as a Undercover Detective for The\r\n Colorado Springs Police Department?\r\n \r\n RON STALLWORTH\r\n Negative. Truth be told I\'ve always\r\n wanted to be a Cop...and I\'m still\r\n for The Liberation for My People.\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-560
PATRICE\r\n My Conscience won\'t let me Sleep with\r\n The Enemy.\r\n \r\n RON STALLWORTH\r\n Enemy? I\'m a Black Man that saved\r\n your life.\r\n \r\n PATRICE\r\n You\'re absolutely right, and I Thank\r\n you for it.\r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-561
you for it.\r\n \r\n Patrice Kisses Ron on the cheek. Good Bye. WE HEAR a KNOCK on\r\n Ron\'s DOOR. Ron, who is startled, slowly rises. We HEAR\r\n another KNOCK.\r\n \r\n QUICK FLASHES - of a an OLD TIME KLAN RALLY. Ron moves\r\n quietly to pull out his SERVICE REVOLVER from the COUNTER\r\n DRAWER. WE HEAR ANOTHER KNOCK on the DOOR. Patrice stands\r\n behind him.\r\n \r\n QUICK FLASHES - BLACK BODY HANGING FROM A TREE (STRANGE\r\n FRUIT) Ron slowly moves to the DOOR. Ron has his SERVICE\r\n REVOLVER up and aimed ready to fire. Ron swings open the\r\n DOOR.\r\n ANGLE - HALLWAY\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-562
\r\n CU - RON\'S POV\r\n \r\n WE TRACK DOWN THE EMPTY HALLWAY PANNING OUT THE WINDOW.\r\n \r\n CLOSE - RON AND PATRICE\r\n \r\n Looking in the distance: The Rolling Hills surrounding The\r\n Neighborhood lead towards Pike\'s Peak, which sits on the\r\n horizon like a King on A Throne.\r\n \r\n WE SEE: Something Burning.\r\n \r\n CLOSER-- WE SEE a CROSS, its Flames dancing, sending embers\r\n into The BLACK, Colorado Sky.\r\n OMITTED\r\n \r\n
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-563
\r\n EXT. UVA CAMPUS - NIGHT\r\n \r\n WE SEE FOOTAGE of NEO-NAZIS, ALT RIGHT, THE KLAN, NEO-\r\n CONFEDERATES AND WHITE NATIONALISTS MARCHING, HOLDING UP\r\n THEIR TIKI TORCHES, CHANTING.\r\n \r\n AMERICAN TERRORISTS\r\n YOU WILL NOT REPLACE US!!!\r\n JEWS WILL NOT REPLACE US!!!\r\n BLOOD AND SOIL!!!\r\n \r\n CUT TO
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
7eb89a485143-564
CUT TO BLACK.\r\n \r\n FINI.\r\n\r\n\r\n\n\n\n\nBlacKkKlansman\nWriters : \xa0\xa0Charlie Wachtel\xa0\xa0David Rabinowitz\xa0\xa0Kevin Willmott\xa0\xa0Spike Lee\nGenres : \xa0\xa0Crime\xa0\xa0Drama\nUser Comments\n\n\n\n\n\r\nBack to IMSDb\n\n\n', lookup_str='', metadata={'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/imsdb.html
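The chunks above are the tail of the IMSDb page's example output; the loader call that produces it is not included in these chunks. A minimal sketch, assuming the IMSDbLoader API from this version of langchain and the script URL shown in the metadata above:

from langchain.document_loaders import IMSDbLoader

# Fetch the BlacKkKlansman script page; returns a list with a single Document
loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
data = loader.load()
# page_content is the raw script text, with the \r\n line endings seen above
data[0].page_content[:250]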
5f22d6b27705-0
PDF# This covers how to load pdfs into a document format that we can use downstream. Using PyPDF# Load PDF using pypdf into an array of documents, where each document contains the page content and metadata with page number. from langchain.document_loaders import PyPDFLoader loader = PyPDFLoader("example_data/layout-parser-paper.pdf") pages = loader.load_and_split() pages[0] Document(page_content='LayoutParser : A Uni\x0ced Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\[email protected]\n2Brown University\nruochen [email protected]\n3Harvard University\nfmelissadell,jacob carlson [email protected]\n4University of Washington\[email protected]\n5University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model con\x0cgurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\ne\x0borts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-1
none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\n·Character Recognition ·Open Source library ·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classi\x0ccation [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', lookup_str='', metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': '0'}, lookup_index=0) An advantage of this approach is that documents can be retrieved with page numbers. from langchain.vectorstores import FAISS from langchain.embeddings.openai import OpenAIEmbeddings faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings()) docs = faiss_index.similarity_search("How will the community be engaged?", k=2) for doc in docs: print(str(doc.metadata["page"]) + ":", doc.page_content) 9:
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-2
print(str(doc.metadata["page"]) + ":", doc.page_content) 9: 10 Z. Shen et al. Fig. 4: Illustration of (a) the original historical Japanese document with layout detection results and (b) a recreated version of the document image that achieves much better character recognition recall. The reorganization algorithm rearranges the tokens based on the their detected bounding boxes given a maximum allowed height. 4LayoutParser Community Platform Another focus of LayoutParser is promoting the reusability of layout detection models and full digitization pipelines. Similar to many existing deep learning libraries, LayoutParser comes with a community model hub for distributing layout models. End-users can upload their self-trained models to the model hub, and these models can be loaded into a similar interface as the currently available LayoutParser pre-trained models. For example, the model trained on the News Navigator dataset [17] has been incorporated in the model hub. Beyond DL models, LayoutParser also promotes the sharing of entire doc- ument digitization pipelines. For example, sometimes the pipeline requires the combination of multiple DL models to achieve better accuracy. Currently, pipelines are mainly described in academic papers and implementations are often not pub- licly available. To this end, the LayoutParser community platform also enables the sharing of layout pipelines to promote the discussion and reuse of techniques. For each shared pipeline, it has a dedicated project page, with links to the source code, documentation, and an outline of the approaches. A discussion panel is provided for exchanging ideas. Combined with the core LayoutParser library, users can easily build reusable components based on the shared pipelines and apply them to solve their unique problems. 5 Use Cases The core objective of LayoutParser is to make it easier to create both large-scale and light-weight document digitization pipelines. Large-scale document processing 3: 4 Z. Shen et al. Efficient Data
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-3
pipelines. Large-scale document processing 3: 4 Z. Shen et al. Efficient Data AnnotationC u s t o m i z e d M o d e l T r a i n i n gModel Cust omizationDI A Model HubDI A Pipeline SharingCommunity PlatformLa y out Detection ModelsDocument Images T h e C o r e L a y o u t P a r s e r L i b r a r yOCR ModuleSt or age & VisualizationLa y out Data Structur e Fig. 1: The overall architecture of LayoutParser . For an input document image, the core LayoutParser library provides a set of o -the-shelf tools for layout detection, OCR, visualization, and storage, backed by a carefully designed layout data structure. LayoutParser also supports high level customization via ecient layout annotation and model training functions. These improve model accuracy on the target samples. The community platform enables the easy sharing of DIA models and whole digitization pipelines to promote reusability and reproducibility. A collection of detailed documentation, tutorials and exemplar projects make LayoutParser easy to learn and use. AllenNLP [ 8] and transformers [ 34] have provided the community with complete DL-based support for developing and deploying models for general computer vision and natural language processing problems. LayoutParser , on the other hand, specializes speci cally in DIA tasks. LayoutParser is also equipped with a community platform inspired by established model hubs such as Torch Hub [23] andTensorFlow Hub [1]. It enables the sharing of pretrained models as well as full document processing pipelines that are unique to DIA tasks. There have been a variety of document data collections to facilitate the development of DL models. Some examples include PRImA [ 3](magazine layouts), PubLayNet [ 38](academic paper
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-4
[ 3](magazine layouts), PubLayNet [ 38](academic paper layouts), Table Bank [ 18](tables in academic papers), Newspaper Navigator Dataset [ 16,17](newspaper gure layouts) and HJDataset [31](historical Japanese document layouts). A spectrum of models trained on these datasets are currently available in the LayoutParser model zoo to support di erent use cases. 3 The Core LayoutParser Library At the core of LayoutParser is an o -the-shelf toolkit that streamlines DL- based document image analysis. Five components support a simple interface with comprehensive functionalities: 1) The layout detection models enable using pre-trained or self-trained DL models for layout detection with just four lines of code. 2) The detected layout information is stored in carefully engineered Using Unstructured# from langchain.document_loaders import UnstructuredPDFLoader loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf") data = loader.load() Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf", mode="elements") data = loader.load() data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-5
University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-6
range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0) Fetching remote PDFs using Unstructured# This covers how to load online pdfs into a document format that we can use downstream. This can be used for various online pdf sites such as https://open.umn.edu/opentextbooks/textbooks/ and https://arxiv.org/archive/ Note: all other pdf loaders can also be used to fetch remote PDFs, but OnlinePDFLoader is a legacy function, and works specifically with UnstructuredPDFLoader. from langchain.document_loaders import OnlinePDFLoader loader = OnlinePDFLoader("https://arxiv.org/pdf/2302.03803.pdf") data = loader.load() print(data) [Document(page_content='A WEAK ( k, k ) -LEFSCHETZ THEOREM FOR PROJECTIVE TORIC ORBIFOLDS\n\nWilliam D. Montoya\n\nInstituto de Matem´atica, Estat´ıstica e Computa¸c˜ao Cient´ıfica,\n\nIn [3] we proved that, under suitable conditions, on a very general codimension s quasi- smooth intersection
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-7
we proved that, under suitable conditions, on a very general codimension s quasi- smooth intersection subvariety X in a projective toric orbifold P d Σ with d + s = 2 ( k + 1 ) the Hodge conjecture holds, that is, every ( p, p ) -cohomology class, under the Poincar´e duality is a rational linear combination of fundamental classes of algebraic subvarieties of X . The proof of the above-mentioned result relies, for p ≠ d + 1 − s , on a Lefschetz\n\nKeywords: (1,1)- Lefschetz theorem, Hodge conjecture, toric varieties, complete intersection Email: [email protected]\n\ntheorem ([7]) and the Hard Lefschetz theorem for projective orbifolds ([11]). When p = d + 1 − s the proof relies on the Cayley trick, a trick which associates to X a quasi-smooth hypersurface Y in a projective vector bundle, and the Cayley Proposition (4.3) which gives an isomorphism of some primitive cohomologies (4.2) of X and Y . The Cayley trick, following the philosophy of Mavlyutov in [7], reduces results known for quasi-smooth hypersurfaces to quasi-smooth intersection subvarieties. The idea in this paper goes the other way around, we translate some results for quasi-smooth intersection subvarieties to\n\nAcknowledgement. I thank Prof. Ugo Bruzzo and Tiago Fonseca for useful discus- sions. I also acknowledge support from FAPESP postdoctoral grant No. 2019/23499-7.\n\nLet M be a free abelian group of rank d , let N = Hom ( M, Z ) , and N R = N ⊗ Z R
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-8
, let N = Hom ( M, Z ) , and N R = N ⊗ Z R .\n\nif there exist k linearly independent primitive elements e\n\n, . . . , e k ∈ N such that σ = { µ\n\ne\n\n+ ⋯ + µ k e k } . • The generators e i are integral if for every i and any nonnegative rational number µ the product µe i is in N only if µ is an integer. • Given two rational simplicial cones σ , σ ′ one says that σ ′ is a face of σ ( σ ′ < σ ) if the set of integral generators of σ ′ is a subset of the set of integral generators of σ . • A finite set Σ = { σ\n\n, . . . , σ t } of rational simplicial cones is called a rational simplicial complete d -dimensional fan if:\n\nall faces of cones in Σ are in Σ ;\n\nif σ, σ ′ ∈ Σ then σ ∩ σ ′ < σ and σ ∩ σ ′ < σ ′ ;\n\nN R = σ\n\n∪ ⋅ ⋅ ⋅ ∪ σ t .\n\nA rational simplicial complete d -dimensional fan Σ defines a d -dimensional toric variety P d Σ having only orbifold singularities which we assume to be projective. Moreover, T ∶ = N ⊗ Z C ∗ ≃ ( C ∗ ) d is the torus action on P d Σ . We denote by Σ ( i ) the i -dimensional cones\n\nFor a cone σ ∈ Σ, ˆ σ is the set of 1-dimensional cone in Σ that are not contained in σ\n\nand x ˆ σ ∶
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-9
cone in Σ that are not contained in σ\n\nand x ˆ σ ∶ = ∏ ρ ∈ ˆ σ x ρ is the associated monomial in S .\n\nDefinition 2.2. The irrelevant ideal of P d Σ is the monomial ideal B Σ ∶ =< x ˆ σ ∣ σ ∈ Σ > and the zero locus Z ( Σ ) ∶ = V ( B Σ ) in the affine space A d ∶ = Spec ( S ) is the irrelevant locus.\n\nProposition 2.3 (Theorem 5.1.11 [5]) . The toric variety P d Σ is a categorical quotient A d ∖ Z ( Σ ) by the group Hom ( Cl ( Σ ) , C ∗ ) and the group action is induced by the Cl ( Σ ) - grading of S .\n\nNow we give a brief introduction to complex orbifolds and we mention the needed theorems for the next section. Namely: de Rham theorem and Dolbeault theorem for complex orbifolds.\n\nDefinition 2.4. A complex orbifold of complex dimension d is a singular complex space whose singularities are locally isomorphic to quotient singularities C d / G , for finite sub- groups G ⊂ Gl ( d, C ) .\n\nDefinition 2.5. A differential form on a complex orbifold Z is defined locally at z ∈ Z as a G -invariant differential form on C d where G ⊂ Gl ( d, C ) and Z is locally isomorphic to d\n\nRoughly speaking the local geometry of orbifolds reduces to local G -invariant geometry.\n\nWe
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-10
speaking the local geometry of orbifolds reduces to local G -invariant geometry.\n\nWe have a complex of differential forms ( A ● ( Z ) , d ) and a double complex ( A ● , ● ( Z ) , ∂, ¯ ∂ ) of bigraded differential forms which define the de Rham and the Dolbeault cohomology groups (for a fixed p ∈ N ) respectively:\n\n(1,1)-Lefschetz theorem for projective toric orbifolds\n\nDefinition 3.1. A subvariety X ⊂ P d Σ is quasi-smooth if V ( I X ) ⊂ A #Σ ( 1 ) is smooth outside\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub-\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub- varieties are quasi-smooth subvarieties (see [2] or [7] for more details).\n\nRemark 3.3 . Quasi-smooth subvarieties are suborbifolds of P d Σ in the sense of Satake in [8]. Intuitively speaking they are subvarieties whose only singularities come from the ambient\n\nProof. From the exponential short exact sequence\n\nwe have a long exact sequence in cohomology\n\nH 1 (O ∗ X ) → H 2 ( X, Z ) → H 2 (O X ) ≃ H 0 , 2 ( X )\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now, it is enough to prove the commutativity of the next diagram\n\nwhere the last isomorphisms is due to
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-11
the commutativity of the next diagram\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now,\n\nH 2 ( X, Z ) / / H 2 ( X, O X ) ≃ Dolbeault H 2 ( X, C ) deRham ≃ H 2 dR ( X, C ) / / H 0 , 2 ¯ ∂ ( X )\n\nof the proof follows as the ( 1 , 1 ) -Lefschetz theorem in [6].\n\nRemark 3.5 . For k = 1 and P d Σ as the projective space, we recover the classical ( 1 , 1 ) - Lefschetz theorem.\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we get an isomorphism of cohomologies :\n\ngiven by the Lefschetz morphism and since it is a morphism of Hodge structures, we have:\n\nH 1 , 1 ( X, Q ) ≃ H dim X − 1 , dim X − 1 ( X, Q )\n\nCorollary 3.6. If the dimension of X is 1 , 2 or 3 . The Hodge conjecture holds on X\n\nProof. If the dim C X = 1 the result is clear by the Hard Lefschetz theorem for projective orbifolds. The dimension 2 and 3 cases are covered by Theorem 3.5 and the Hard Lefschetz.\n\nCayley trick and Cayley proposition\n\nThe Cayley trick is a way to associate to a quasi-smooth
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-12
and Cayley proposition\n\nThe Cayley trick is a way to associate to a quasi-smooth intersection subvariety a quasi- smooth hypersurface. Let L 1 , . . . , L s be line bundles on P d Σ and let π ∶ P ( E ) → P d Σ be the projective space bundle associated to the vector bundle E = L 1 ⊕ ⋯ ⊕ L s . It is known that P ( E ) is a ( d + s − 1 ) -dimensional simplicial toric variety whose fan depends on the degrees of the line bundles and the fan Σ. Furthermore, if the Cox ring, without considering the grading, of P d Σ is C [ x 1 , . . . , x m ] then the Cox ring of P ( E ) is\n\nMoreover for X a quasi-smooth intersection subvariety cut off by f 1 , . . . , f s with deg ( f i ) = [ L i ] we relate the hypersurface Y cut off by F = y 1 f 1 + ⋅ ⋅ ⋅ + y s f s which turns out to be quasi-smooth. For more details see Section 2 in [7].\n\nWe will denote P ( E ) as P d + s − 1 Σ ,X to keep track of its relation with X and P d Σ .\n\nThe following is a key remark.\n\nRemark 4.1 . There is a morphism ι ∶ X → Y ⊂ P d + s − 1 Σ ,X . Moreover every point z ∶ = ( x, y ) ∈ Y with y ≠ 0 has a preimage. Hence for any subvariety W = V ( I W ) ⊂ X ⊂ P d Σ there exists W ′
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-13
= V ( I W ) ⊂ X ⊂ P d Σ there exists W ′ ⊂ Y ⊂ P d + s − 1 Σ ,X such that π ( W ′ ) = W , i.e., W ′ = { z = ( x, y ) ∣ x ∈ W } .\n\nFor X ⊂ P d Σ a quasi-smooth intersection variety the morphism in cohomology induced by the inclusion i ∗ ∶ H d − s ( P d Σ , C ) → H d − s ( X, C ) is injective by Proposition 1.4 in [7].\n\nDefinition 4.2. The primitive cohomology of H d − s prim ( X ) is the quotient H d − s ( X, C )/ i ∗ ( H d − s ( P d Σ , C )) and H d − s prim ( X, Q ) with rational coefficients.\n\nH d − s ( P d Σ , C ) and H d − s ( X, C ) have pure Hodge structures, and the morphism i ∗ is com- patible with them, so that H d − s prim ( X ) gets a pure Hodge structure.\n\nThe next Proposition is the Cayley proposition.\n\nProposition 4.3. [Proposition 2.3 in [3] ] Let X = X 1 ∩⋅ ⋅ ⋅∩ X s be a quasi-smooth intersec- tion subvariety in P d Σ cut off by homogeneous polynomials f 1 . . . f s . Then for p ≠ d + s − 1 2 , d + s − 3 2\n\nRemark 4.5 . The above isomorphisms are
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-14
− 3 2\n\nRemark 4.5 . The above isomorphisms are also true with rational coefficients since H ● ( X, C ) = H ● ( X, Q ) ⊗ Q C . See the beginning of Section 7.1 in [10] for more details.\n\nTheorem 5.1. Let Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to the quasi-smooth intersection surface X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f k ⊂ P k + 2 Σ . Then on Y the Hodge conjecture holds.\n\nthe Hodge conjecture holds.\n\nProof. If H k,k prim ( X, Q ) = 0 we are done. So let us assume H k,k prim ( X, Q ) ≠ 0. By the Cayley proposition H k,k prim ( Y, Q ) ≃ H 1 , 1 prim ( X, Q ) and by the ( 1 , 1 ) -Lefschetz theorem for projective\n\ntoric orbifolds there is a non-zero algebraic basis λ C 1 , . . . , λ C n with rational coefficients of H 1 , 1 prim ( X, Q ) , that is, there are n ∶ = h 1 , 1 prim ( X, Q ) algebraic curves C 1 , . . . , C n in X such that under the Poincar´e duality the class in homology [ C i ] goes to λ C i , [ C i ] ↦ λ C i . Recall that the Cox ring of P k + 2
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-15
C i ] ↦ λ C i . Recall that the Cox ring of P k + 2 is contained in the Cox ring of P 2 k + 1 Σ ,X without considering the grading. Considering the grading we have that if α ∈ Cl ( P k + 2 Σ ) then ( α, 0 ) ∈ Cl ( P 2 k + 1 Σ ,X ) . So the polynomials defining C i ⊂ P k + 2 Σ can be interpreted in P 2 k + 1 X, Σ but with different degree. Moreover, by Remark 4.1 each C i is contained in Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } and\n\nfurthermore it has codimension k .\n\nClaim: { C i } ni = 1 is a basis of prim ( ) . It is enough to prove that λ C i is different from zero in H k,k prim ( Y, Q ) or equivalently that the cohomology classes { λ C i } ni = 1 do not come from the ambient space. By contradiction, let us assume that there exists a j and C ⊂ P 2 k + 1 Σ ,X such that λ C ∈ H k,k ( P 2 k + 1 Σ ,X , Q ) with i ∗ ( λ C ) = λ C j or in terms of homology there exists a ( k + 2 ) -dimensional algebraic subvariety V ⊂ P 2 k + 1 Σ ,X such that V ∩ Y = C j so they are equal as a homology class of P 2 k + 1 Σ ,X ,i.e., [ V ∩ Y ] = [ C j ] . It is
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-16
,X ,i.e., [ V ∩ Y ] = [ C j ] . It is easy to check that π ( V ) ∩ X = C j as a subvariety of P k + 2 Σ where π ∶ ( x, y ) ↦ x . Hence [ π ( V ) ∩ X ] = [ C j ] which is equivalent to say that λ C j comes from P k + 2 Σ which contradicts the choice of [ C j ] .\n\nRemark 5.2 . Into the proof of the previous theorem, the key fact was that on X the Hodge conjecture holds and we translate it to Y by contradiction. So, using an analogous argument we have:\n\nargument we have:\n\nProposition 5.3. Let Y = { F = y 1 f s +⋯+ y s f s = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to a quasi-smooth intersection subvariety X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f s ⊂ P d Σ such that d + s = 2 ( k + 1 ) . If the Hodge conjecture holds on X then it holds as well on Y .\n\nCorollary 5.4. If the dimension of Y is 2 s − 1 , 2 s or 2 s + 1 then the Hodge conjecture holds on Y .\n\nProof. By Proposition 5.3 and Corollary 3.6.\n\n[\n\n] Angella, D. Cohomologies of certain orbifolds. Journal of Geometry and Physics\n\n(\n\n),\n\n–\n\n[\n\n] Batyrev, V. V., and Cox, D.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-17
Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal\n\n,\n\n(Aug\n\n). [\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n). [\n\n] Caramello Jr, F. C. Introduction to orbifolds. a\n\niv:\n\nv\n\n(\n\n). [\n\n] Cox, D., Little, J., and Schenck, H. Toric varieties, vol.\n\nAmerican Math- ematical Soc.,\n\n[\n\n] Griffiths, P., and Harris, J. Principles of Algebraic Geometry. John Wiley & Sons, Ltd,\n\n[\n\n] Mavlyutov, A. R. Cohomology of complete intersections in toric varieties. Pub- lished in Pacific J. of Math.\n\nNo.\n\n(\n\n),\n\n–\n\n[\n\n] Satake, I. On a Generalization of the Notion of Manifold. Proceedings of the National Academy of Sciences of the United States of America\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Steenbrink, J. H. M. Intersection form for quasi-homogeneous singularities. Com- positio Mathematica\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Voisin, C. Hodge Theory and Complex Algebraic Geometry I, vol.\n\nof Cambridge Studies in Advanced Mathematics . Cambridge University Press,\n\n[\n\n] Wang,
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-18
Cambridge Studies in Advanced Mathematics . Cambridge University Press,\n\n[\n\n] Wang, Z. Z., and Zaffran, D. A remark on the Hard Lefschetz theorem for K¨ahler orbifolds. Proceedings of the American Mathematical Society\n\n,\n\n(Aug\n\n).\n\n[2] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal 75, 2 (Aug 1994).\n\n[\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n).\n\n[3] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (2021).\n\nA. R. Cohomology of complete intersections in toric varieties. Pub-', lookup_str='', metadata={'source': '/var/folders/ph/hhm7_zyx4l13k3v8z02dwp1w0000gn/T/tmpgq0ckaja/online_file.pdf'}, lookup_index=0)] Using PDFMiner# from langchain.document_loaders import PDFMinerLoader loader = PDFMinerLoader("example_data/layout-parser-paper.pdf") data = loader.load() Using PDFMiner to generate HTML text# This can be helpful for chunking texts semantically into sections as the output html content can be parsed via BeautifulSoup to get more structured and rich information about font size, page numbers, pdf headers/footers, etc. from
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-19
structured and rich information about font size, page numbers, pdf headers/footers, etc. from langchain.document_loaders import PDFMinerPDFasHTMLLoader loader = PDFMinerPDFasHTMLLoader("example_data/layout-parser-paper.pdf") data = loader.load() Using PyMuPDF# This is the fastest of the PDF parsing options; it provides detailed metadata about the PDF and its pages, and returns one document per page. from langchain.document_loaders import PyMuPDFLoader loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf") data = loader.load() data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
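The notebook stops at loading the HTML; a sketch of the BeautifulSoup parsing it alludes to, assuming pdfminer's usual div/span output with inline font-size styles (the grouping heuristic is illustrative, not part of langchain):

import re
from bs4 import BeautifulSoup

# data[0].page_content holds the single HTML rendering of the whole PDF
soup = BeautifulSoup(data[0].page_content, "html.parser")

# Collect (text, font-size) pairs; font size is a rough proxy for structure,
# since headings are usually rendered larger than body text.
snippets = []
for div in soup.find_all("div"):
    span = div.find("span")
    if span is None:
        continue
    style = span.get("style") or ""
    sizes = re.findall(r"font-size:(\d+)px", style)
    if not sizes:
        continue
    snippets.append((div.get_text(strip=True), int(sizes[0])))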
5f22d6b27705-20
challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0) Additionally, you can pass along any
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
5f22d6b27705-21
'', 'encryption': None}, lookup_index=0) Additionally, you can pass along any of the options from the PyMuPDF documentation as keyword arguments in the load call, and they will be passed along to the get_text() call.
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/pdf.html
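For instance, a hedged sketch (TEXT_DEHYPHENATE is a standard PyMuPDF get_text() flag, but verify against your installed version):

import fitz  # PyMuPDF

from langchain.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
# Keyword arguments to load() are forwarded to page.get_text();
# here PyMuPDF is asked to rejoin words hyphenated across line breaks.
data = loader.load(flags=fitz.TEXT_DEHYPHENATE)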
ab69f1f1d890-0
Telegram# This notebook covers how to load data from Telegram into a format that can be ingested into LangChain. from langchain.document_loaders import TelegramChatLoader loader = TelegramChatLoader("example_data/telegram.json") loader.load() [Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\n\n", lookup_str='', metadata={'source': 'example_data/telegram.json'}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/telegram.html
6dae9694846c-0
DuckDB Loader# Load a DuckDB query with one document per row. from langchain.document_loaders import DuckDBLoader %%file example.csv Team,Payroll Nationals,81.34 Reds,82.20 Writing example.csv loader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')") data = loader.load() print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})] Specifying Which Columns are Content vs Metadata# loader = DuckDBLoader( "SELECT * FROM read_csv_auto('example.csv')", page_content_columns=["Team"], metadata_columns=["Payroll"] ) data = loader.load() print(data) [Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})] Adding Source to Metadata# loader = DuckDBLoader( "SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')", metadata_columns=["source"] ) data = loader.load() print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/duckdb.html
2ba5d0680062-0
Obsidian# This notebook covers how to load documents from an Obsidian database. Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory. from langchain.document_loaders import ObsidianLoader loader = ObsidianLoader("<path-to-obsidian>") docs = loader.load()
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/obsidian.html
fc0158204d1d-0
YouTube# How to load documents from YouTube transcripts. from langchain.document_loaders import YoutubeLoader # !pip install youtube-transcript-api loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) loader.load() Add video info# # ! pip install pytube loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) loader.load() YouTube loader from Google Cloud# Prerequisites# Create a Google Cloud project or use an existing project Enable the YouTube API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api 🧑 Instructions for ingesting your YouTube data# By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader. GoogleApiYoutubeLoader can load from a list of YouTube video ids or a channel name. Note: depending on your setup, the service_account_path may also need to be set. See here for more details. from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader # Init the GoogleApiClient from pathlib import Path google_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json")) # Use a Channel youtube_loader_channel = GoogleApiYoutubeLoader(google_api_client=google_api_client, channel_name="Reducible",captions_language="en") # Use Youtube
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/youtube.html
fc0158204d1d-1
channel_name="Reducible",captions_language="en") # Use Youtube Ids youtube_loader_ids = GoogleApiYoutubeLoader(google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True) # returns a list of Documents youtube_loader_channel.load()
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/youtube.html
4c307ba02b0e-0
Notebook# This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. from langchain.document_loaders import NotebookLoader loader = NotebookLoader("example_data/notebook.ipynb", include_outputs=True, max_output_length=20, remove_newline=True) NotebookLoader.load() loads the .ipynb notebook file into a Document object. Parameters: include_outputs (bool): whether to include cell outputs in the resulting document (default is False). max_output_length (int): the maximum number of characters to include from each cell output (default is 10). remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False). traceback (bool): whether to include full traceback (default is False). loader.load() [Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.ipynb")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'*
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/notebook.html
4c307ba02b0e-1
whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', lookup_str='', metadata={'source': 'example_data/notebook.ipynb'}, lookup_index=0)]
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/notebook.html
e25e6e3331bd-0
ReadTheDocs Documentation# This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build. For an example of this in the wild, see here. This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following command: #!wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/ from langchain.document_loaders import ReadTheDocsLoader loader = ReadTheDocsLoader("rtdocs") docs = loader.load()
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/readthedocs_documentation.html
87c144ef3acb-0
Blackboard# This covers how to load data from a Blackboard Learn instance. from langchain.document_loaders import BlackboardLoader loader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", load_all_recursively=True, ) documents = loader.load()
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/blackboard.html
f92cd77d7bb4-0
Notion DB Loader# NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects. Requirements# A Notion Database Notion Integration Token Setup# 1. Create a Notion Table Database# Create a new table database in Notion. Any columns you add to the database will be treated as metadata. For example, you can add the following columns: Title: set Title as the default property. Categories: A Multi-select property to store categories associated with the page. Keywords: A Multi-select property to store keywords associated with the page. Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages. 2. Create a Notion Integration# To create a Notion Integration, follow these steps: Visit the Notion Developers page (https://www.notion.com/my-integrations) and log in with your Notion account. Click on the “+ New integration” button. Give your integration a name and choose the workspace where your database is located. Select the required capabilities; this integration only needs the Read content capability. Click the “Submit” button to create the integration. Once the integration is created, you’ll be provided with an Integration Token (API key). Copy this token and keep it safe, as you’ll need it to use the NotionDBLoader. 3. Connect the Integration to the Database# To connect your integration to the database, follow these steps: Open your database in Notion. Click on the three-dot menu icon in the top right corner of the database view. Click on the
https://langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/notiondb.html
f92cd77d7bb4-1
on the three-dot menu icon in the top right corner of the database view. Click on the “+ New integration” button. Find your integration; you may need to start typing its name in the search box. Click on the “Connect” button to connect the integration to the database. 4. Get the Database ID# To get the database ID, follow these steps: Open your database in Notion. Click on the three-dot menu icon in the top right corner of the database view. Select “Copy link” from the menu to copy the database URL to your clipboard. The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=…. In this example, the database ID is 8935f9d140a04f95a872520c4f123456. With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database. Usage# NotionDBLoader is part of the langchain package’s document loaders. You can use it as follows: from getpass import getpass NOTION_TOKEN = getpass() DATABASE_ID = getpass() ········ ········ from langchain.document_loaders import NotionDBLoader loader = NotionDBLoader(NOTION_TOKEN, DATABASE_ID) docs = loader.load() print(docs)
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/notiondb.html
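The loader returns standard LangChain Document objects, with each page's body in page_content and its database columns surfaced as metadata. A minimal sketch of inspecting the result (the metadata keys here assume the Title/Categories/Keywords columns from the setup above):

# Peek at the Documents produced by NotionDBLoader (sketch; keys mirror your column names)
for doc in docs[:3]:
    print(doc.metadata)            # column values from the database, e.g. title/categories/keywords
    print(doc.page_content[:100])  # start of the page body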
c913c76b6d97-0
.ipynb .pdf CoNLL-U CoNLL-U# This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples. from langchain.document_loaders import CoNLLULoader loader = CoNLLULoader("example_data/conllu.conllu") document = loader.load() document
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/CoNLL-U.html
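Since the whole file becomes one Document, a quick sanity check is to confirm the list length and peek at the text (a minimal sketch, assuming the load above succeeded):

# The CoNLL-U loader should yield exactly one Document for the file
print(len(document))                   # expected: 1
print(document[0].page_content[:100])  # start of the parsed text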
6a114a592dde-0
.ipynb .pdf s3 File s3 File# This covers how to load document objects from an s3 file object. from langchain.document_loaders import S3FileLoader #!pip install boto3 loader = S3FileLoader("testing-hwc", "fake.docx") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/s3_file.html
103f14aa76da-0
.ipynb .pdf Google Drive Contents Prerequisites 🧑 Instructions for ingesting your Google Docs data Google Drive# This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported. Prerequisites# Create a Google Cloud project or use an existing project Enable the Google Drive API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib 🧑 Instructions for ingesting your Google Docs data# By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_file keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader. GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL: Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5" Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw" from langchain.document_loaders import GoogleDriveLoader loader = GoogleDriveLoader(folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5") docs = loader.load()
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/googledrive.html
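GoogleDriveLoader can also be pointed at specific documents rather than a whole folder. A short sketch reusing the document id extracted from the URL above (document_ids accepts a list, per the description above):

from langchain.document_loaders import GoogleDriveLoader
# Load specific Google Docs by id instead of crawling a folder
loader = GoogleDriveLoader(document_ids=["1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"])
docs = loader.load()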
f4b72502c1fe-0
.ipynb .pdf Markdown Contents Retain Elements Markdown# This covers how to load markdown documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredMarkdownLoader loader = UnstructuredMarkdownLoader("../../../../README.md") data = loader.load() data [Document(page_content="ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain\n\nâ\x9a¡ Building applications with LLMs through composability â\x9a¡\n\nProduction Support: As you move your LangChains into production, we'd love to offer more comprehensive support.\nPlease fill out this form and we'll set up a dedicated support Slack channel.\n\nQuick Install\n\npip install langchain\n\nð\x9f¤” What is this?\n\nLarge language models (LLMs) are emerging as a transformative technology, enabling\ndevelopers to build applications that they previously could not.\nBut using these LLMs in isolation is often not enough to\ncreate a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\n\nâ\x9d“ Question Answering over specific documents\n\nDocumentation\n\nEnd-to-end Example: Question Answering over Notion Database\n\nð\x9f’¬ Chatbots\n\nDocumentation\n\nEnd-to-end Example: Chat-LangChain\n\nð\x9f¤\x96 Agents\n\nDocumentation\n\nEnd-to-end Example: GPT+WolframAlpha\n\nð\x9f“\x96 Documentation\n\nPlease see here for full documentation on:\n\nGetting started (installation, setting up the environment, simple examples)\n\nHow-To examples (demos,
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/markdown.html
f4b72502c1fe-1
(installation, setting up the environment, simple examples)\n\nHow-To examples (demos, integrations, helper functions)\n\nReference (full API docs)\n Resources (high-level explanation of core concepts)\n\nð\x9f\x9a\x80 What can this help with?\n\nThere are six main areas that LangChain is designed to help with.\nThese are, in increasing order of complexity:\n\nð\x9f“\x83 LLMs and Prompts:\n\nThis includes prompt management, prompt optimization, generic interface for all LLMs, and common utilities for working with LLMs.\n\nð\x9f”\x97 Chains:\n\nChains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n\nð\x9f“\x9a Data Augmented Generation:\n\nData Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\n\nð\x9f¤\x96 Agents:\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\n\nð\x9f§\xa0 Memory:\n\nMemory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\n\nð\x9f§\x90
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/markdown.html
f4b72502c1fe-2
and examples of chains/agents that use memory.\n\nð\x9f§\x90 Evaluation:\n\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\nFor more information on these concepts, please see our full documentation.\n\nð\x9f’\x81 Contributing\n\nAs an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.\n\nFor detailed information on how to contribute, see here.", lookup_str='', metadata={'source': '../../../../README.md'}, lookup_index=0)] Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredMarkdownLoader("../../../../README.md", mode="elements") data = loader.load() data[0] Document(page_content='ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain', lookup_str='', metadata={'source': '../../../../README.md', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0)
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/markdown.html
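With mode="elements", the per-element category metadata makes it easy to keep only certain chunks. A small sketch (category values such as 'Title' depend on how Unstructured classifies the file, so treat this as illustrative):

# Keep only the elements Unstructured classified as titles
titles = [el for el in data if el.metadata.get("category") == "Title"]
for t in titles:
    print(t.page_content)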
319cd592303f-0
.ipynb .pdf Apify Dataset Contents Prerequisites An example with question answering Apify Dataset# This notebook shows how to load Apify datasets into LangChain. Apify Dataset is a scalable, append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, and for exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for various web scraping, crawling, and data extraction use cases. Prerequisites# You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs. First, import ApifyDatasetLoader into your source code: from langchain.document_loaders import ApifyDatasetLoader from langchain.document_loaders.base import Document Then provide a function that maps Apify dataset record fields to LangChain Document format. For example, if your dataset items are structured like this: { "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform." } The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering). loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ), ) data = loader.load() An example with question answering# In this example, we use data from a dataset to answer a question. from langchain.docstore.document import Document from langchain.document_loaders import
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
319cd592303f-1
answer a question. from langchain.docstore.document import Document from langchain.document_loaders import ApifyDatasetLoader from langchain.indexes import VectorstoreIndexCreator loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ), ) index = VectorstoreIndexCreator().from_loaders([loader]) query = "What is Apify?" result = index.query_with_sources(query) print(result["answer"]) print(result["sources"]) Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform. https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
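The dataset_mapping_function is also the place to adapt records whose shape differs from the example above. A sketch for a hypothetical dataset whose items carry title and body fields instead of text (these field names are illustrative, not part of the API):

from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document

# Hypothetical record shape: {"title": ..., "body": ..., "url": ...}
loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",
    dataset_mapping_function=lambda item: Document(
        page_content=(item.get("title", "") + "\n\n" + item.get("body", "")),
        metadata={"source": item.get("url", "")},
    ),
)
data = loader.load()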
9dd9d1dcafaf-0
.ipynb .pdf Facebook Chat Facebook Chat# This notebook covers how to load data from Facebook Chat into a format that can be ingested into LangChain. from langchain.document_loaders import FacebookChatLoader loader = FacebookChatLoader("example_data/facebook_chat.json") loader.load() [Document(page_content='User 2 on 2023-02-05 12:46:11: Bye!\n\nUser 1 on 2023-02-05 12:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 12:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 12:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 12:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 12:04:28: Here is $129\n\nUser 2 on 2023-02-05 12:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 11:59:59: How much do you want?\n\nUser 2 on 2023-02-05 07:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 23:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', lookup_str='', metadata={'source': 'docs/modules/document_loaders/examples/example_data/facebook_chat.json'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html
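The loader returns the whole conversation as a single Document, so for downstream use (e.g. embedding) you will usually want to split it first. A sketch using LangChain's CharacterTextSplitter (the chunk size here is illustrative):

from langchain.text_splitter import CharacterTextSplitter

# Split the single chat Document on the blank lines between messages
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=500, chunk_overlap=0)
chunks = splitter.split_documents(loader.load())
print(len(chunks))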
6a3ce3831499-0
.ipynb .pdf GitBook Contents Load from single GitBook page Load from all paths in a given GitBook GitBook# How to pull page data from any GitBook. from langchain.document_loaders import GitbookLoader loader = GitbookLoader("https://docs.gitbook.com") Load from single GitBook page# page_data = loader.load() page_data [Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)] Load from all paths in a given GitBook# For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True. loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True) all_pages_data = loader.load() Fetching text from https://docs.gitbook.com/ Fetching text from
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/gitbook.html
6a3ce3831499-1
= loader.load() Fetching text from https://docs.gitbook.com/ Fetching text from https://docs.gitbook.com/getting-started/overview Fetching text from https://docs.gitbook.com/getting-started/import Fetching text from https://docs.gitbook.com/getting-started/git-sync Fetching text from https://docs.gitbook.com/getting-started/content-structure Fetching text from https://docs.gitbook.com/getting-started/collaboration Fetching text from https://docs.gitbook.com/getting-started/publishing Fetching text from https://docs.gitbook.com/tour/quick-find Fetching text from https://docs.gitbook.com/tour/editor Fetching text from https://docs.gitbook.com/tour/customization Fetching text from https://docs.gitbook.com/tour/member-management Fetching text from https://docs.gitbook.com/tour/pdf-export Fetching text from https://docs.gitbook.com/tour/activity-history Fetching text from https://docs.gitbook.com/tour/insights Fetching text from https://docs.gitbook.com/tour/notifications Fetching text from https://docs.gitbook.com/tour/internationalization Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts Fetching text from https://docs.gitbook.com/tour/seo Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security Fetching text from https://docs.gitbook.com/advanced-guides/integrations Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings Fetching text from https://docs.gitbook.com/billing-and-admin/plans Fetching text from https://docs.gitbook.com/troubleshooting/faqs Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh Fetching text from
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/gitbook.html
6a3ce3831499-2
text from https://docs.gitbook.com/troubleshooting/hard-refresh Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues Fetching text from https://docs.gitbook.com/troubleshooting/support print(f"fetched {len(all_pages_data)} documents.") # show second document all_pages_data[2] fetched 28 documents. Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/gitbook.html
6a3ce3831499-3
of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/gitbook.html
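Because each fetched page keeps its source URL and title in metadata, the full crawl can be narrowed after the fact. A small sketch that keeps only the 'tour' section pages fetched above:

# Filter the crawled pages down to one section using the source URL in metadata
tour_pages = [d for d in all_pages_data if "/tour/" in d.metadata["source"]]
print(f"kept {len(tour_pages)} of {len(all_pages_data)} pages")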
f048055a7161-0
.ipynb .pdf s3 Directory Contents Specifying a prefix s3 Directory# This covers how to load document objects from an s3 directory object. from langchain.document_loaders import S3DirectoryLoader #!pip install boto3 loader = S3DirectoryLoader("testing-hwc") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)] Specifying a prefix# You can also specify a prefix for more fine-grained control over what files to load. loader = S3DirectoryLoader("testing-hwc", prefix="fake") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/s3_directory.html
cceec8edb0f5-0
.ipynb .pdf Bilibili Bilibili# This loader utilizes the bilibili-api to fetch the text transcript from Bilibili, one of the most beloved long-form video sites in China. With this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform. from langchain.document_loaders.bilibili import BiliBiliLoader #!pip install bilibili-api loader = BiliBiliLoader( ["https://www.bilibili.com/video/BV1xt411o7Xu/"] ) loader.load()
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/bilibili.html
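As with the other loaders, the result is a list of Documents, one per video URL. A minimal sketch of inspecting the fetched transcript (assuming the video has one available):

# Inspect the transcript Document returned for the video above
docs = loader.load()
for doc in docs:
    print(doc.metadata)            # video metadata as returned by bilibili-api
    print(doc.page_content[:200])  # first part of the transcript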
bcccdfd2eed2-0
.ipynb .pdf Unstructured File Loader Contents Retain Elements Define a Partitioning Strategy PDF Example Unstructured File Loader# This notebook covers how to use Unstructured to load files of many types. Unstructured currently supports loading of text files, PowerPoint files, HTML, PDFs, images, and more. # # Install package !pip install "unstructured[local-inference]" !pip install "detectron2@git+https://github.com/facebookresearch/[email protected]#egg=detectron2" !pip install layoutparser[layoutmodels,tesseract] # # Install other dependencies # # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst # !brew install libmagic # !brew install poppler # !brew install tesseract # # If parsing xml / html documents: # !brew install libxml2 # !brew install libxslt # import nltk # nltk.download('punkt') from langchain.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt") docs = loader.load() docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit' Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt",
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
bcccdfd2eed2-1
specifying mode="elements". loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements") docs = loader.load() docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)] Define a Partitioning Strategy# The Unstructured document loader allows users to pass in a strategy parameter that lets Unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fall back to fast if a dependency is missing (e.g. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below. from langchain.document_loaders import
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
bcccdfd2eed2-2
how to apply a strategy to an UnstructuredFileLoader below. from langchain.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements") docs = loader.load() docs[:5] [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)] PDF Example# Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. !wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../" loader = UnstructuredFileLoader("./example_data/layout-parser-paper.pdf", mode="elements") docs = loader.load() docs[:5] [Document(page_content='LayoutParser : A Unified Toolkit for Deep
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
bcccdfd2eed2-3
: A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI [email protected]', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen [email protected]', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
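Since the strategy only changes how a file is partitioned, one quick way to see the speed/accuracy trade-off is to run both strategies over the same file and compare element counts. A sketch (assuming the PDF above is present and the hi_res dependencies are installed):

from langchain.document_loaders import UnstructuredFileLoader

# Compare how many elements each partitioning strategy produces for the same PDF
for strategy in ["fast", "hi_res"]:
    loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy=strategy, mode="elements")
    print(strategy, len(loader.load()))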
c64a14370544-0
.ipynb .pdf College Confidential College Confidential# This covers how to load College Confidential webpages into a document format that we can use downstream. from langchain.document_loaders import CollegeConfidentialLoader loader = CollegeConfidentialLoader("https://www.collegeconfidential.com/colleges/brown-university/") data = loader.load() data [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-1
get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-2
This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt,
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-3
not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-4
are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-5
deadline for Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. \n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-6
Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-7
740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-8
\n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-9
\n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
c64a14370544-10
 96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/college_confidential.html
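The loader returns the entire page as one very long Document, so splitting before indexing is usually necessary. A sketch using LangChain's RecursiveCharacterTextSplitter (the chunk sizes are illustrative):

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Break the long College Confidential page into overlapping chunks for retrieval
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)
print(len(chunks))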
e286fe4c2754-0
.ipynb .pdf WhatsApp Chat WhatsApp Chat# This notebook covers how to load data from WhatsApp chats into a format that can be ingested into LangChain. from langchain.document_loaders import WhatsAppChatLoader loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt") loader.load()
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/whatsapp_chat.html
3d0c51dada59-0
.ipynb .pdf CSV Loader Contents Customizing the csv parsing and loading Specify a column to be used to identify the document source CSV Loader# Load CSV files with a single row per document. from langchain.document_loaders.csv_loader import CSVLoader loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv') data = loader.load() print(data) [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv',
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html
3d0c51dada59-1
lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source':
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html
3d0c51dada59-2
95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75',
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html
3d0c51dada59-3
Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)":
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html
3d0c51dada59-4
lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)] Customizing the csv parsing and loading# See the csv module documentation for more information on what csv args are supported. loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', csv_args={ 'delimiter': ',', 'quotechar': '"', 'fieldnames': ['MLB Team', 'Payroll in millions', 'Wins'] }) data = loader.load() print(data) [Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98',
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html
3d0c51dada59-5
Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='',
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html
3d0c51dada59-6
Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='',
https:///langchain-cn.readthedocs.io/en/latest/modules/indexes/document_loaders/examples/csv.html