PDFPlumberLoader
This notebook provides a quick overview for getting started with the PDFPlumber document loader. For detailed documentation of all PDFPlumberLoader features and configurations, head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | JS support |
---|---|---|---|---|
PDFPlumberLoader | langchain_community | ✅ | ❌ | ❌ |
Loader features
Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
---|---|---|---|---|
PDFPlumberLoader | ✅ | ❌ | ✅ | ✅ |
Setup
Credentials
No credentials are required to use PDFPlumberLoader.
If you want automated, best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
Install langchain_community and pdfplumber.
%pip install -qU langchain_community pdfplumber
Note: you may need to restart the kernel to use updated packages.
Initialization
Now we can instantiate our loader object and load documents:
from langchain_community.document_loaders import PDFPlumberLoader
file_path = "./example_data/layout-parser-paper.pdf"
loader = PDFPlumberLoader(file_path)
Load
docs = loader.load()
docs[0]
Document(metadata={'author': '', 'creationdate': '2021-06-22T01:27:10+00:00', 'creator': 'LaTeX with hyperref', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'producer': 'pdfTeX-1.40.21', 'subject': '', 'title': '', 'trapped': 'False', 'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recentadvancesindocumentimageanalysis(DIA)havebeen\nprimarily driven by the application of neural networks. Ideally, research\noutcomescouldbeeasilydeployedinproductionandextendedforfurther\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportantinnovationsbyawideaudience.Thoughtherehavebeenon-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopmentindisciplineslikenaturallanguageprocessingandcomputer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademicresearchacross awiderangeof disciplinesinthesocialsciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitiveinterfacesforapplyingandcustomizingDLmodelsforlayoutde-\ntection,characterrecognition,andmanyotherdocumentprocessingtasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: DocumentImageAnalysis·DeepLearning·LayoutAnalysis\n· Character Recognition · Open Source library · Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocumentimageanalysis(DIA)tasksincludingdocumentimageclassification[11,\n1202 nuJ 12 ]VC.sc[ 2v84351.3012:viXra\n')
import pprint
pprint.pp(docs[0].metadata)
{'author': '',
'creationdate': '2021-06-22T01:27:10+00:00',
'creator': 'LaTeX with hyperref',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'producer': 'pdfTeX-1.40.21',
'subject': '',
'title': '',
'trapped': 'False',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0}
Lazy Load
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)
        pages = []
len(pages)
6
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA 11
focuses on precision, efficiency, and robustness
{'author': '',
'creationdate': '2021-06-22T01:27:10+00:00',
'creator': 'LaTeX with hyperref',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'producer': 'pdfTeX-1.40.21',
'subject': '',
'title': '',
'trapped': 'False',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 10}
The metadata attribute contains at least the following keys:
- source
- page (if in page mode)
- total_pages
- creationdate
- creator
- producer
Additional metadata is specific to each parser. This information can be helpful, for example, to categorize your PDFs.
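For example, here is a minimal, illustrative sketch (reusing the docs loaded above) that groups documents by their source file; this is mainly useful when loading several PDFs at once:
from collections import defaultdict

docs_by_source = defaultdict(list)
for doc in docs:
    # Group each page-level Document under the PDF it came from
    docs_by_source[doc.metadata["source"]].append(doc)
print({source: len(group) for source, group in docs_by_source.items()})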
Splitting mode & custom pages delimiter
When loading the PDF file, you can split it in two different ways:
- By page
- As a single text flow
By default, PDFPlumberLoader splits the PDF by page.
Extract the PDF by page. Each page is extracted as a langchain Document object:
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'author': '',
'creationdate': '2021-06-22T01:27:10+00:00',
'creator': 'LaTeX with hyperref',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'producer': 'pdfTeX-1.40.21',
'subject': '',
'title': '',
'trapped': 'False',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0}
In this mode, the PDF is split by page and the resulting Document's metadata contains the page number. But in some cases you may want to process the PDF as a single text flow (so that paragraphs are not cut in half). In that case, you can use the single mode:
Extract the whole PDF as a single langchain Document object:
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'author': '',
'creationdate': '2021-06-22T01:27:10+00:00',
'creator': 'LaTeX with hyperref',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'producer': 'pdfTeX-1.40.21',
'subject': '',
'title': '',
'trapped': 'False',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16}
Logically, in this mode the page metadata disappears. Here's how to clearly identify where pages end in the text flow:
Add a custom pages_delimiter to identify where pages end in single mode:
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recentadvancesindocumentimageanalysis(DIA)havebeen
primarily driven by the application of neural networks. Ideally, research
outcomescouldbeeasilydeployedinproductionandextendedforfurther
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portantinnovationsbyawideaudience.Thoughtherehavebeenon-going
efforts to improve reusability and simplify deep learning (DL) model
developmentindisciplineslikenaturallanguageprocessingandcomputer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academicresearchacross awiderangeof disciplinesinthesocialsciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitiveinterfacesforapplyingandcustomizingDLmodelsforlayoutde-
tection,characterrecognition,andmanyotherdocumentprocessingtasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: DocumentImageAnalysis·DeepLearning·LayoutAnalysis
· Character Recognition · Open Source library · Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
documentimageanalysis(DIA)tasksincludingdocumentimageclassification[11,
1202 nuJ 12 ]VC.sc[ 2v84351.3012:viXra
-------THIS IS A CUSTOM END OF PAGE-------
2 Z. Shen et al.
37], layout detection [38, 22], table detection [26], and scene text detection [4].
A generalized learning-based framework dramatically reduces the need for the
manualspecificationofcomplicatedrules,whichisthestatusquowithtraditional
methods. DL has the potential to transform DIA pipelines and benefit a broad
spectrum of large-scale document digitization projects.
However, there are several practical difficulties for taking advantages of re-
cent advances in DL-based methods: 1) DL models are notoriously convoluted
for reuse and extension. Existing models are developed using distinct frame-
works like TensorFlow [1] or PyTorch [24], and the high-level parameters can
be obfuscated by implementation details [8]. It can be a time-consuming and
frustrating experience to debug, reproduce, and adapt existing models for DIA,
and many researchers who would benefit the most from using these methods lack
the technical background to implement them from scratch. 2) Document images
contain diverse and disparate patterns across domains, and customized training
is often required to achieve a desirable detection accuracy. Currently there is no
full-fledged infrastructure for easily curating the target document image datasets
and fine-tuning or re-training the models. 3) DIA usually requires a sequence of
modelsandotherprocessingtoobtainthefinaloutputs.Oftenresearchteamsuse
DL models and then perform further document analyses in separate processes,
and these pipelines are not documented in any central location (and often not
documented at all). This makes it difficult for research teams to learn about how
full pipelines are implemented and leads them to invest significant resources in
reinventing the DIA wheel.
LayoutParserprovidesaunifiedtoolkittosupportDL-baseddocumentimage
analysisandprocessing.Toaddresstheaforementionedchallenges,LayoutParser
is built with the following components:
1. Anoff-the-shelftoolkitforapplyingDLmodelsforlayoutdetection,character
recognition, and other DIA tasks (Section 3)
2. A rich repository of pre-trained neural network models (Model Zoo) that
underlies the off-the-shelf usage
3. Comprehensivetoolsforefficientdocumentimagedataannotationandmodel
tuning to support different levels of customization
4. A DL model hub and community platform for the easy sharing, distribu-
tion, and discussion of DIA models and pipelines, to promote reusability,
reproducibility, and extensibility (Section 4)
The library implements simple and intuitive Python APIs without sacrificing
generalizability and versatility, and can be easily installed via pip. Its convenient
functions for handling document image data can be seamlessly integrated with
existing DIA pipelines. With detailed documentations and carefully curated
tutorials, we hope this tool will benefit a variety of end-users, and will lead to
advances in applications in both industry and academic research.
LayoutParser is well aligned with recent efforts for improving DL model
reusability in other disciplines like natural language processing [8, 34] and com-
puter vision [35], but with a focus on unique challenges in DIA. We show
LayoutParsercanbeappliedinsophisticatedandlarge-scaledigitizationprojects
-------THIS IS A CUSTOM END OF PAGE-------
LayoutParser: A Unified Toolkit for DL-Based DIA 3
that require precision, efficiency, and robustness, as well as simple and light-
weight document processing tasks focusing on efficacy and flexibility (Section 5).
LayoutParser is being actively mainta
The delimiter could simply be \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> for seamless injection in a Markdown viewer without a visual effect.
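If you need the page boundaries back later, an illustrative sketch (not part of the loader API) is to split the single-mode text on the same delimiter:
page_texts = docs[0].page_content.split(
    "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
)
print(len(page_texts))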
Extract images from the PDF
You can extract images from your PDFs with a choice of three different solutions:
- RapidOCR (lightweight Optical Character Recognition tool)
- Tesseract (OCR tool with high precision)
- Multimodal language model
You can choose the output format of the extracted images among html, markdown, or text.
The result is inserted between the last and the second-to-last paragraphs of text on the page.
Extract images from the PDF with RapidOCR:
%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig.2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinateinformationandextrafeatureslikeblocktext,types,andreadingorders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
thelayoutelements.Indocumentimageanalysispipelines,variouspost-processing
on the layout analysis model outputs is usually required to obtain the final
outputs.Traditionally,thisrequiresexportingDLmodeloutputsandthenloading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.

Be careful: RapidOCR is designed to work with Chinese and English, not other languages.
Extract images from the PDF with Tesseract:
%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig.2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinateinformationandextrafeatureslikeblocktext,types,andreadingorders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
thelayoutelements.Indocumentimageanalysispipelines,variouspost-processing
on the layout analysis model outputs is usually required to obtain the final
outputs.Traditionally,thisrequiresexportingDLmodeloutputsandthenloading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
<img alt="Coordinate
textblock
x-interval
JeAsaqul-A
Coordinate
+
Extra features
Rectangle
Quadrilateral
Block
Text
Block
Type
Reading
Order
layout
[ coordinatel textblock1 |
'
“y textblock2 , layout1 ]
A list of the layout elements
The same transformation and operation APIs src="#" />
Extract images from the PDF with a multimodal model:
%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os
from dotenv import load_dotenv
load_dotenv()
True
from getpass import getpass
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(
        model=ChatOpenAI(model="gpt-4o", max_tokens=1024)
    ),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig.2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinateinformationandextrafeatureslikeblocktext,types,andreadingorders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
thelayoutelements.Indocumentimageanalysispipelines,variouspost-processing
on the layout analysis model outputs is usually required to obtain the final
outputs.Traditionally,thisrequiresexportingDLmodeloutputsandthenloading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.
![**Image Summary:**
Diagram illustrating coordinate, text block, and layout elements with transformation and operation APIs. Includes coordinate intervals, rectangles, quadrilaterals, and extra features like block text, block type, and reading order.
**Extracted Text:**
\`\`\`
Coordinate
(x1, y1)
Rectangle
(x2, y2)
(x1, y1)
(x2, y2)
(x4, y4)
(x3, y3)
Quadrilateral
The same transformation and operation APIs
textblock
Coordinate
+
Extra features
Block Text
Block Type
Reading Order
…
layout
[ coordinate1, textblock1, ...
…, textblock2, layout1 \\]
A list of the layout elements
x- interval
st a rt
start
y-interval
en d
end
\`\`\`](#)
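The loader features table above also lists table extraction. Recent langchain_community releases expose an extract_tables parameter on the loader ("csv", "markdown", or "html"); treat the exact option names as an assumption and check the API reference for your installed version. A minimal sketch:
loader = PDFPlumberLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    extract_tables="markdown",  # assumption: available in recent langchain_community releases
)
docs = loader.load()
# Print a page that contains a table in your document (the index here is illustrative)
print(docs[4].page_content[:500])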
Working with Files
Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.
As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded. You can use this strategy to analyze different files with the same parsing parameters.
from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PDFPlumberParser
loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=PDFPlumberParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recentadvancesindocumentimageanalysis(DIA)havebeen
primarily driven by the application of neural networks. Ideally, research
outcomescouldbeeasilydeployedinproductionandextendedforfurther
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portantinnovationsbyawideaudience.Thoughtherehavebeenon-going
efforts to improve reusability and simplify deep learning (DL) model
developmentindisciplineslikenaturallanguageprocessingandcomputer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academicresearchacross awiderangeof disciplinesinthesocialsciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitiveinterfacesforapplyingandcustomizingDLmodelsforlayoutde-
tection,characterrecognition,andmanyotherdocumentprocessingtasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: DocumentImageAnalysis·DeepLearning·LayoutAnalysis
· Character Recognition · Open Source library · Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
documentimageanalysis(DIA)tasksincludingdocumentimageclassification[11,
1202 nuJ 12 ]VC.sc[ 2v84351.3012:viXra
{'author': '',
'creationdate': '2021-06-22T01:27:10+00:00',
'creator': 'LaTeX with hyperref',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'producer': 'pdfTeX-1.40.21',
'subject': '',
'title': '',
'trapped': 'False',
'source': 'example_data/layout-parser-paper.pdf',
'file_path': 'example_data/layout-parser-paper.pdf',
'total_pages': 16,
'page': 0}
It is possible to work with files from cloud storage.
from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=PDFPlumberParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
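GenericLoader also supports lazy loading, so you can stream documents from the blob loader instead of materializing them all at once. A minimal sketch, reusing the loader defined above:
for doc in loader.lazy_load():
    # Process each parsed Document as it is produced
    print(doc.metadata["source"], doc.metadata.get("page"))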
Related
- Document loader conceptual guide
- Document loader how-to guides