Commit 36c59e0

Arxiv document loader (langchain-ai#3627)
It makes sense to use `arxiv` as another source of documents for downloading.

- Added the `arxiv` document_loader, based on `utilities/arxiv.py:ArxivAPIWrapper`
- Added tests
- Added an example notebook
- Sorted `__all__` in `__init__.py` (otherwise it is hard to find a class in the very long list)
1 parent 539142f commit 36c59e0

File tree

7 files changed: +462 −70 lines changed

Lines changed: 177 additions & 0 deletions
@@ -0,0 +1,177 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bda1f3f5",
   "metadata": {},
   "source": [
    "# Arxiv\n",
    "\n",
    "[arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.\n",
    "\n",
    "This notebook shows how to load scientific articles from `Arxiv.org` into a document format that we can use downstream."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b7a1eef-7bf7-4e7d-8bfc-c4e27c9488cb",
   "metadata": {},
   "source": [
    "## Installation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2abd5578-aa3d-46b9-99af-8b262f0b3df8",
   "metadata": {},
   "source": [
    "First, you need to install the `arxiv` python package."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b674aaea-ed3a-4541-8414-260a8f67f623",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "!pip install arxiv"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "094b5f13-7e54-4354-9d83-26d6926ecaa0",
   "metadata": {
    "tags": []
   },
   "source": [
    "Second, you need to install the `PyMuPDF` python package, which transforms the PDF files downloaded from the `arxiv.org` site into text format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7cd91121-2e96-43ba-af50-319853695f86",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "!pip install pymupdf"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "95f05e1c-195e-4e2b-ae8e-8d6637f15be6",
   "metadata": {},
   "source": [
    "## Examples"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e29b954c-1407-4797-ae21-6ba8937156be",
   "metadata": {},
   "source": [
    "`ArxivLoader` has these arguments:\n",
    "- `query`: free text used to find documents on arXiv\n",
    "- optional `load_max_docs`: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.\n",
    "- optional `load_all_available_meta`: default=False. By default, only the most important fields are downloaded: `Published` (the date the document was published or last updated), `Title`, `Authors`, `Summary`. If True, the other available fields are also downloaded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9bfd5e46",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.document_loaders.base import Document\n",
    "from langchain.document_loaders import ArxivLoader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "700e4ef2",
   "metadata": {},
   "outputs": [],
   "source": [
    "docs = ArxivLoader(query=\"1605.08386\", load_max_docs=2).load()\n",
    "len(docs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "8977bac0-0042-4f23-9754-247dbd32439b",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'Published': '2016-05-26',\n",
       " 'Title': 'Heat-bath random walks with Markov bases',\n",
       " 'Authors': 'Caprice Stanley, Tobias Windisch',\n",
       " 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'}"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "docs[0].metadata  # meta-information of the Document"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "46969806-45a9-4c4d-a61b-cfb9658fc9de",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'arXiv:1605.08386v1 [math.CO] 26 May 2016\\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\\nCAPRICE STANLEY AND TOBIAS WINDISCH\\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\\nbehaviour of heat-b'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "docs[0].page_content[:400]  # the first 400 characters of the Document content"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}

langchain/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@
     PromptTemplate,
 )
 from langchain.sql_database import SQLDatabase
-from langchain.utilities import ArxivAPIWrapper
+from langchain.utilities.arxiv import ArxivAPIWrapper
 from langchain.utilities.google_search import GoogleSearchAPIWrapper
 from langchain.utilities.google_serper import GoogleSerperAPIWrapper
 from langchain.utilities.powerbi import PowerBIDataset

langchain/document_loaders/__init__.py

Lines changed: 64 additions & 62 deletions
@@ -2,6 +2,7 @@

 from langchain.document_loaders.airbyte_json import AirbyteJSONLoader
 from langchain.document_loaders.apify_dataset import ApifyDatasetLoader
+from langchain.document_loaders.arxiv import ArxivLoader
 from langchain.document_loaders.azlyrics import AZLyricsLoader
 from langchain.document_loaders.azure_blob_storage_container import (
     AzureBlobStorageContainerLoader,
@@ -90,78 +91,79 @@
 PagedPDFSplitter = PyPDFLoader

 __all__ = [
-    "UnstructuredFileLoader",
-    "UnstructuredFileIOLoader",
-    "UnstructuredURLLoader",
-    "SeleniumURLLoader",
-    "PlaywrightURLLoader",
+    "AZLyricsLoader",
+    "AirbyteJSONLoader",
+    "ApifyDatasetLoader",
+    "ArxivLoader",
+    "AzureBlobStorageContainerLoader",
+    "AzureBlobStorageFileLoader",
+    "BSHTMLLoader",
+    "BigQueryLoader",
+    "BiliBiliLoader",
+    "BlackboardLoader",
+    "BlockchainDocumentLoader",
+    "CSVLoader",
+    "ChatGPTLoader",
+    "CoNLLULoader",
+    "CollegeConfidentialLoader",
+    "ConfluenceLoader",
+    "DataFrameLoader",
+    "DiffbotLoader",
     "DirectoryLoader",
-    "NotionDirectoryLoader",
-    "NotionDBLoader",
-    "ReadTheDocsLoader",
+    "DiscordChatLoader",
+    "DuckDBLoader",
+    "EverNoteLoader",
+    "FacebookChatLoader",
+    "GCSDirectoryLoader",
+    "GCSFileLoader",
+    "GitLoader",
+    "GitbookLoader",
+    "GoogleApiClient",
+    "GoogleApiYoutubeLoader",
     "GoogleDriveLoader",
-    "UnstructuredHTMLLoader",
-    "BSHTMLLoader",
-    "UnstructuredPowerPointLoader",
-    "UnstructuredWordDocumentLoader",
-    "UnstructuredPDFLoader",
-    "UnstructuredImageLoader",
-    "ObsidianLoader",
-    "UnstructuredEmailLoader",
-    "OutlookMessageLoader",
-    "UnstructuredEPubLoader",
-    "UnstructuredMarkdownLoader",
-    "UnstructuredRTFLoader",
-    "RoamLoader",
-    "YoutubeLoader",
-    "S3FileLoader",
-    "TextLoader",
+    "GutenbergLoader",
     "HNLoader",
-    "GitbookLoader",
-    "S3DirectoryLoader",
-    "GCSFileLoader",
-    "GCSDirectoryLoader",
-    "WebBaseLoader",
-    "IMSDbLoader",
-    "AZLyricsLoader",
-    "CollegeConfidentialLoader",
+    "HuggingFaceDatasetLoader",
     "IFixitLoader",
-    "GutenbergLoader",
-    "PagedPDFSplitter",
-    "PyPDFLoader",
-    "EverNoteLoader",
-    "AirbyteJSONLoader",
+    "IMSDbLoader",
+    "ImageCaptionLoader",
+    "NotebookLoader",
+    "NotionDBLoader",
+    "NotionDirectoryLoader",
+    "ObsidianLoader",
     "OnlinePDFLoader",
+    "OutlookMessageLoader",
     "PDFMinerLoader",
     "PDFMinerPDFasHTMLLoader",
+    "PagedPDFSplitter",
+    "PlaywrightURLLoader",
     "PyMuPDFLoader",
-    "TelegramChatLoader",
+    "PyPDFLoader",
+    "PythonLoader",
+    "ReadTheDocsLoader",
+    "RoamLoader",
+    "S3DirectoryLoader",
+    "S3FileLoader",
     "SRTLoader",
-    "FacebookChatLoader",
-    "NotebookLoader",
-    "CoNLLULoader",
-    "GoogleApiYoutubeLoader",
-    "GoogleApiClient",
-    "CSVLoader",
-    "BlackboardLoader",
-    "ApifyDatasetLoader",
-    "WhatsAppChatLoader",
-    "DataFrameLoader",
-    "AzureBlobStorageFileLoader",
-    "AzureBlobStorageContainerLoader",
+    "SeleniumURLLoader",
     "SitemapLoader",
-    "DuckDBLoader",
-    "BigQueryLoader",
-    "DiffbotLoader",
-    "BiliBiliLoader",
     "SlackDirectoryLoader",
-    "GitLoader",
+    "TelegramChatLoader",
+    "TextLoader",
     "TwitterTweetLoader",
-    "ImageCaptionLoader",
-    "DiscordChatLoader",
-    "ConfluenceLoader",
-    "PythonLoader",
-    "ChatGPTLoader",
-    "HuggingFaceDatasetLoader",
-    "BlockchainDocumentLoader",
+    "UnstructuredEPubLoader",
+    "UnstructuredEmailLoader",
+    "UnstructuredFileIOLoader",
+    "UnstructuredFileLoader",
+    "UnstructuredHTMLLoader",
+    "UnstructuredImageLoader",
+    "UnstructuredMarkdownLoader",
+    "UnstructuredPDFLoader",
+    "UnstructuredPowerPointLoader",
+    "UnstructuredRTFLoader",
+    "UnstructuredURLLoader",
+    "UnstructuredWordDocumentLoader",
+    "WebBaseLoader",
+    "WhatsAppChatLoader",
+    "YoutubeLoader",
 ]
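The re-sorted `__all__` above is easy to keep from regressing with a one-line check. A minimal sketch of such a check (the `loaders` list below is an illustrative excerpt, not the full export list, and this check is not part of the commit):

```python
# Sketch of the invariant this commit establishes: __all__ is kept
# alphabetically sorted so a class is easy to find in the long list.
loaders = [
    "AZLyricsLoader",
    "AirbyteJSONLoader",
    "ApifyDatasetLoader",
    "ArxivLoader",
    "AzureBlobStorageContainerLoader",
]

# Python's default string sort is case-sensitive ASCII order, which is the
# order used here: "AZLyricsLoader" precedes "AirbyteJSONLoader" because
# uppercase "Z" sorts before lowercase "i".
assert loaders == sorted(loaders)
```

A test like this could live next to the existing loader tests so a future unsorted insertion fails CI immediately.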
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
from typing import List, Optional

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.utilities.arxiv import ArxivAPIWrapper


class ArxivLoader(BaseLoader):
    """Loads a query result from arxiv.org into a list of Documents.

    Each Document corresponds to one retrieved article.
    The loader converts the original PDF into plain text.
    """

    def __init__(
        self,
        query: str,
        load_max_docs: Optional[int] = 100,
        load_all_available_meta: Optional[bool] = False,
    ):
        self.query = query
        self.load_max_docs = load_max_docs
        self.load_all_available_meta = load_all_available_meta

    def load(self) -> List[Document]:
        arxiv_client = ArxivAPIWrapper(
            load_max_docs=self.load_max_docs,
            load_all_available_meta=self.load_all_available_meta,
        )
        docs = arxiv_client.load(self.query)
        return docs
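The loader above is a thin wrapper that delegates the actual fetching to `ArxivAPIWrapper`. A runnable sketch of the same delegation pattern, with the API wrapper replaced by a hypothetical offline stub (`StubArxivAPIWrapper` and `ArxivLoaderSketch` are illustrative names, not part of the commit), so it needs neither `langchain` installed nor network access:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Document:
    """Minimal stand-in for langchain.docstore.document.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class StubArxivAPIWrapper:
    """Offline stand-in for ArxivAPIWrapper: fabricates results locally."""

    def __init__(self, load_max_docs: int, load_all_available_meta: bool):
        self.load_max_docs = load_max_docs
        self.load_all_available_meta = load_all_available_meta

    def load(self, query: str) -> List[Document]:
        # Pretend the query matched three papers; honor load_max_docs.
        hits = [
            Document(f"paper {i} about {query}", {"Title": f"Paper {i}"})
            for i in range(3)
        ]
        return hits[: self.load_max_docs]


class ArxivLoaderSketch:
    """Same shape as ArxivLoader: store the arguments, delegate in load()."""

    def __init__(self, query, load_max_docs=100, load_all_available_meta=False):
        self.query = query
        self.load_max_docs = load_max_docs
        self.load_all_available_meta = load_all_available_meta

    def load(self) -> List[Document]:
        client = StubArxivAPIWrapper(self.load_max_docs, self.load_all_available_meta)
        return client.load(self.query)


docs = ArxivLoaderSketch(query="heat-bath random walks", load_max_docs=2).load()
print(len(docs))  # 2
```

Keeping the loader stateless until `load()` is called means constructing an `ArxivLoader` is cheap and no network traffic happens before the caller explicitly asks for documents.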
