Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines.
Unstructured wants to make it easier to connect to your data… and we need your help! We're excited to announce a competition focused on improving Unstructured's ability to seamlessly process data from the sources you care about most.
The competition starts now and continues through March 10... and most importantly, we're offering cash prizes! Please join our community Slack to participate and follow along.
The `unstructured` library provides open-source components for pre-processing text documents such as PDFs, HTML, and Word documents. These components are packaged as bricks 🧱, which provide users the building blocks they need to build pipelines targeted at the documents they care about. Bricks in the library fall into three categories (a short sketch combining all three follows this list):
- 🧩 Partitioning bricks that break raw documents down into standard, structured elements.
- 🧹 Cleaning bricks that remove unwanted text from documents, such as boilerplate and sentence fragments.
- 🎭 Staging bricks that format data for downstream tasks, such as ML inference and data labeling.
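As a rough sketch of how the three categories compose into a pipeline (this is only an illustration, assuming the `clean_extra_whitespace` cleaning brick and the `convert_to_isd` staging brick are available in your installed version):

```python
from unstructured.partition.auto import partition
from unstructured.cleaners.core import clean_extra_whitespace
from unstructured.staging.base import convert_to_isd

# Partitioning brick: break a raw document into structured elements
elements = partition(filename="example-docs/fake-email.eml")

# Cleaning brick: strip unwanted whitespace from each element's text
cleaned = [clean_extra_whitespace(str(el)) for el in elements]

# Staging brick: format the elements for a downstream task (here, a list of dicts)
isd = convert_to_isd(elements)
```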
Use the following instructions to get up and running with `unstructured` and test your installation.
- Install the Python SDK with `pip install "unstructured[local-inference]"`
  - If you do not need to process PDFs or images, you can run `pip install unstructured`
- Install the following system dependencies if they are not already available on your system. Depending on what document types you're parsing, you may not need all of these.
  - `libmagic-dev` (filetype detection)
  - `poppler-utils` (images and PDFs)
  - `tesseract-ocr` (images and PDFs)
  - `libreoffice` (MS Office docs)
- If you are parsing PDFs, run the following to install the `detectron2` model, which `unstructured` uses for layout detection:
  - `pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"`
At this point, you should be able to run the following code:

```python
from unstructured.partition.auto import partition

elements = partition(filename="example-docs/fake-email.eml")
print("\n\n".join([str(el) for el in elements]))
```
And if you installed with `local-inference`, you should be able to run this as well:

```python
from unstructured.partition.auto import partition

elements = partition("example-docs/layout-parser-paper.pdf")
print("\n\n".join([str(el) for el in elements]))
```
The following instructions are intended to help you get up and running with `unstructured` locally if you are planning to contribute to the project.
- Using `pyenv` to manage virtualenvs is recommended but not necessary.
- Create a virtualenv to work in and activate it, e.g. for one named `unstructured`:

  `pyenv virtualenv 3.8.15 unstructured`
  `pyenv activate unstructured`

- Run `make install`
- Optional:
  - To install models and dependencies for processing images and PDFs locally, run `make install-local-inference`.
  - For processing image files, `tesseract` is required. See here for installation instructions.
  - For processing PDF files, `tesseract` and `poppler` are required. The pdf2image docs have instructions on installing `poppler` across various platforms (a quick optional check follows this list).
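If you want to sanity-check that `poppler` is visible from Python before running the PDF examples, one optional check (assuming the `pdf2image` package is installed in your environment, as it commonly is alongside the PDF-processing dependencies) is:

```python
from pdf2image import convert_from_path

# If poppler is installed correctly, this rasterizes the first page of the sample PDF
pages = convert_from_path("example-docs/layout-parser-paper.pdf", first_page=1, last_page=1)
print(f"Rendered {len(pages)} page(s)")
```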
You can use this Colab notebook to run the examples below.
The following examples show how to get started with the `unstructured` library. You can parse TXT, HTML, PDF, EML, DOC, DOCX, PPT, PPTX, JPG, and PNG documents with one line of code!
See our documentation page for a full description of the features in the library.
The easiest way to parse a document in unstructured is to use the `partition` brick. If you use the `partition` brick, `unstructured` will detect the file type and route it to the appropriate file-specific partitioning brick. If you are using the `partition` brick, you may need to install additional dependencies via `pip install unstructured[local-inference]`. Ensure you first install `libmagic` using the instructions outlined here. Note that `partition` will always apply the default arguments; if you need advanced features, use a document-specific brick. The `partition` brick currently works for `.txt`, `.doc`, `.docx`, `.ppt`, `.pptx`, `.jpg`, `.png`, `.eml`, `.html`, and `.pdf` documents.
fromunstructured.partition.autoimportpartitionelements=partition("example-docs/layout-parser-paper.pdf")
Runprint("\n\n".join([str(el) for el in elements]))
to get a string representation of theoutput, which looks like:
LayoutParser : A Unified Toolkit for Deep Learning Based Document Image AnalysisZejiang Shen 1 ( (cid:0) ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , andWeining Li 5Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neuralnetworks. Ideally, research outcomes could be easily deployed in production and extended for further investigation.However, various factors like loosely organized codebases and sophisticated model configurations complicate the easyreuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability andsimplify deep learning (DL) model development in disciplines like natural language processing and computer vision, noneof them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIAis central to academic research across a wide range of disciplines in the social sciences and humanities. This paperintroduces LayoutParser , an open-source library for streamlining the usage of DL in DIA research and applica- tions.The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL modelsfor layout de- tection, character recognition, and many other document processing tasks. To promote extensibility,LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zationpipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines inreal-word use cases. The library is publicly available at https://layout-parser.github.ioKeywords: Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library ·Toolkit.IntroductionDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasksincluding document image classification [11,
You can parse an HTML document using the following workflow:
```python
from unstructured.partition.html import partition_html

elements = partition_html("example-docs/example-10k.html")
print("\n\n".join([str(el) for el in elements[:5]]))
```
The print statement will show the following text:
```
UNITED STATES

SECURITIES AND EXCHANGE COMMISSION

Washington, D.C. 20549

FORM 10-K

ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934
```
And `elements` will be a list of elements in the HTML document, similar to the following:

```python
[<unstructured.documents.elements.Title at 0x169cbe820>,
 <unstructured.documents.elements.NarrativeText at 0x169cbe8e0>,
 <unstructured.documents.elements.NarrativeText at 0x169cbe3a0>]
```
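If you want to work with just one element type from this output, you can filter the list by class. A small illustrative follow-up (not part of the original example) using the `NarrativeText` element class shown above:

```python
from unstructured.documents.elements import NarrativeText

# Keep only the narrative text elements from the partitioned HTML document
narrative = [el for el in elements if isinstance(el, NarrativeText)]
print("\n\n".join([str(el) for el in narrative[:5]]))
```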
You can use the following workflow to parse PDF documents.
```python
from unstructured.partition.pdf import partition_pdf

elements = partition_pdf("example-docs/layout-parser-paper.pdf")
```
The output will look the same as the example from the document parsing section above.
The `partition_email` function within `unstructured` is helpful for parsing `.eml` files. Common e-mail clients such as Microsoft Outlook and Gmail support exporting e-mails as `.eml` files. `partition_email` accepts filenames, file-like objects, and raw text as input. The following three snippets for parsing `.eml` files are equivalent:

```python
from unstructured.partition.email import partition_email

elements = partition_email(filename="example-docs/fake-email.eml")

with open("example-docs/fake-email.eml", "r") as f:
    elements = partition_email(file=f)

with open("example-docs/fake-email.eml", "r") as f:
    text = f.read()
elements = partition_email(text=text)
```
The `elements` output will look like the following:

```python
[<unstructured.documents.html.HTMLNarrativeText at 0x13ab14370>,
 <unstructured.documents.html.HTMLTitle at 0x106877970>,
 <unstructured.documents.html.HTMLListItem at 0x1068776a0>,
 <unstructured.documents.html.HTMLListItem at 0x13fe4b0a0>]
```
Run `print("\n\n".join([str(el) for el in elements]))` to get a string representation of the output, which looks like:

```
This is a test email to use for unit tests.

Important points:

Roses are red

Violets are blue
```
The `partition_text` function within `unstructured` can be used to parse simple text files into elements.
`partition_text` accepts filenames, file-like objects, and raw text as input. The following three snippets are for parsing text files:

```python
from unstructured.partition.text import partition_text

elements = partition_text(filename="example-docs/fake-text.txt")

with open("example-docs/fake-text.txt", "r") as f:
    elements = partition_text(file=f)

with open("example-docs/fake-text.txt", "r") as f:
    text = f.read()
elements = partition_text(text=text)
```
The `elements` output will look like the following:

```python
[<unstructured.documents.html.HTMLNarrativeText at 0x13ab14370>,
 <unstructured.documents.html.HTMLTitle at 0x106877970>,
 <unstructured.documents.html.HTMLListItem at 0x1068776a0>,
 <unstructured.documents.html.HTMLListItem at 0x13fe4b0a0>]
```
Run `print("\n\n".join([str(el) for el in elements]))` to get a string representation of the output, which looks like:

```
This is a test document to use for unit tests.

Important points:

Hamburgers are delicious

Dogs are the best

I love fuzzy blankets
```
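Once you have `elements` from any of the partitioning bricks above, a staging brick can format them for a downstream tool. For example, assuming the `stage_for_label_studio` helper in `unstructured.staging.label_studio` is available in your installed version (see the documentation page for the full set of staging bricks):

```python
from unstructured.staging.label_studio import stage_for_label_studio

# Convert partitioned elements into records ready for upload to Label Studio
label_studio_data = stage_for_label_studio(elements)
print(label_studio_data[:2])
```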
See our security policy for information on how to report security vulnerabilities.
| Section | Description |
|---|---|
| Company Website | Unstructured.io product and company info |
| Documentation | Full API documentation |
| Batch Processing | Ingesting batches of documents through Unstructured |