Surya

OCR, layout analysis, reading order, table recognition in 90+ languages
Surya is a document OCR toolkit that does:

  • OCR in 90+ languages that benchmarks favorably vs cloud services
  • Line-level text detection in any language
  • Layout analysis (table, image, header, etc detection)
  • Reading order detection
  • Table recognition (detecting rows/columns)
  • LaTeX OCR

It works on a range of documents (see usage and benchmarks for more details).

[Example images: Detection | OCR | Layout | Reading Order | Table Recognition | LaTeX OCR]

Surya is named for the Hindu sun god, who has universal vision.

Community

Discord is where we discuss future development.

Examples

[Example image table: Detection, OCR, Layout, Order, and Table Rec outputs for Japanese, Chinese, Hindi, Arabic, Chinese + Hindi, Presentation, Scientific Paper, Scanned Document, New York Times, Scanned Form, and Textbook documents]

Hosted API

There is a hosted API for all surya models available here:

  • Works with PDF, images, word docs, and powerpoints
  • Consistent speed, with no latency spikes
  • High reliability and uptime

Commercial usage

I want surya to be as widely accessible as possible, while still funding my development/training costs. Research and personal usage are always okay, but there are some restrictions on commercial usage.

The weights for the models are licensed cc-by-nc-sa-4.0, but I will waive that for any organization under $5M USD in gross revenue in the most recent 12-month period AND under $5M in lifetime VC/angel funding raised. You also must not be competitive with the Datalab API. If you want to remove the GPL license requirements (dual-license) and/or use the weights commercially over the revenue limit, check out the options here.

Installation

You'll need Python 3.10+ and PyTorch. You may need to install the CPU version of torch first if you're not using a Mac or a GPU machine. See here for more details.

Install with:

pip install surya-ocr

Model weights will automatically download the first time you run surya.

Usage

  • Inspect the settings in surya/settings.py. You can override any settings with environment variables.
  • Your torch device will be automatically detected, but you can override this. For example, TORCH_DEVICE=cuda (see the sketch below).
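
A minimal sketch of overriding settings from Python before constructing the predictors (the variable names come from the settings above; the values are only examples):

import os

# Example values - any setting in surya/settings.py can be overridden the same way.
os.environ["TORCH_DEVICE"] = "cuda"           # force the torch device instead of auto-detection
os.environ["RECOGNITION_BATCH_SIZE"] = "256"  # lower this if you run out of VRAM

# Import surya after the overrides are in place so they are picked up.
from surya.recognition import RecognitionPredictor

recognition_predictor = RecognitionPredictor()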

Interactive App

I've included a streamlit app that lets you interactively try Surya on images or PDF files. Run it with:

pip install streamlit pdftext
surya_gui

OCR (text recognition)

This command will write out a json file with the detected text and bboxes:

surya_ocr DATA_PATH
  • DATA_PATH can be an image, pdf, or folder of images/pdfs
  • --langs is an optional (but recommended) argument that specifies the language(s) to use for OCR. You can comma separate multiple languages. Use the language name or two-letter ISO code from here. Surya supports the 90+ languages found in surya/languages.py.
  • --lang_file if you want to use a different language for different PDFs/images, you can optionally specify languages in a file. The format is a JSON dict with the keys being filenames and the values as a list, like {"file1.pdf": ["en", "hi"], "file2.pdf": ["en"]}.
  • --images will save images of the pages and detected text lines (optional)
  • --output_dir specifies the directory to save results to instead of the default
  • --page_range specifies the page range to process in the PDF, specified as a single number, a comma separated list, a range, or comma separated ranges - example: 0,5-10,20.

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains (a small parsing sketch follows this list):

  • text_lines - the detected text and bounding boxes for each line
    • text - the text in the line
    • confidence - the confidence of the model in the detected text (0-1)
    • polygon - the polygon for the text line in (x1, y1), (x2, y2), (x3, y3), (x4, y4) format. The points are in clockwise order from the top left.
    • bbox - the axis-aligned rectangle for the text line in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner.
  • languages - the languages specified for the page
  • page - the page number in the file
  • image_bbox - the bbox for the image in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner. All line bboxes will be contained within this bbox.
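
A minimal sketch for reading this output (the results.json path below is hypothetical - point it at the file surya_ocr wrote):

import json

with open("results.json") as f:
    results = json.load(f)

# Keys are input filenames without extensions; values are lists of page dicts.
for doc_name, pages in results.items():
    for page in pages:
        print(doc_name, "page", page["page"], "languages:", page["languages"])
        for line in page["text_lines"]:
            print(f"  {line['confidence']:.2f}  {line['text']}  bbox={line['bbox']}")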

Performance tips

Setting the RECOGNITION_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 40MB of VRAM, so very high batch sizes are possible. The default batch size is 512, which will use about 20GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 32.

From python

from PIL import Image
from surya.recognition import RecognitionPredictor
from surya.detection import DetectionPredictor

image = Image.open(IMAGE_PATH)
langs = ["en"]  # Replace with your languages or pass None (recommended to use None)
recognition_predictor = RecognitionPredictor()
detection_predictor = DetectionPredictor()

predictions = recognition_predictor([image], [langs], detection_predictor)

Compilation

The following models have support for compilation. You will need to set the following environment variables to enable compilation:

  • Recognition: COMPILE_RECOGNITION=true
  • Detection: COMPILE_DETECTOR=true
  • Layout: COMPILE_LAYOUT=true
  • Table recognition: COMPILE_TABLE_REC=true

Alternatively, you can set COMPILE_ALL=true, which will compile all models.

Here are the speedups on an A10 GPU:

Model | Time per page (s) | Compiled time per page (s) | Speedup (%)
Recognition | 0.657556 | 0.56265 | 14.43
Detection | 0.108808 | 0.10521 | 3.31
Layout | 0.27319 | 0.27063 | 0.94
Table recognition | 0.0219 | 0.01938 | 11.51

Text line detection

This command will write out a json file with the detected bboxes.

surya_detect DATA_PATH
  • DATA_PATH can be an image, pdf, or folder of images/pdfs
  • --images will save images of the pages and detected text lines (optional)
  • --output_dir specifies the directory to save results to instead of the default
  • --page_range specifies the page range to process in the PDF, specified as a single number, a comma separated list, a range, or comma separated ranges - example: 0,5-10,20.

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains (a sketch for drawing the detections follows this list):

  • bboxes - detected bounding boxes for text
    • bbox - the axis-aligned rectangle for the text line in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner.
    • polygon - the polygon for the text line in (x1, y1), (x2, y2), (x3, y3), (x4, y4) format. The points are in clockwise order from the top left.
    • confidence - the confidence of the model in the detected text (0-1)
  • vertical_lines - vertical lines detected in the document
    • bbox - the axis-aligned line coordinates.
  • page - the page number in the file
  • image_bbox - the bbox for the image in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner. All line bboxes will be contained within this bbox.
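
As a small illustration, a sketch that overlays the detected polygons on the source image (the image and results.json paths are hypothetical; the fields are the ones described above):

import json
from PIL import Image, ImageDraw

image = Image.open("page.png").convert("RGB")
with open("results.json") as f:
    pages = next(iter(json.load(f).values()))  # pages of the first input file

draw = ImageDraw.Draw(image)
for line in pages[0]["bboxes"]:
    # polygon is four (x, y) points in clockwise order from the top left
    draw.polygon([tuple(point) for point in line["polygon"]], outline="red")

image.save("page_detected.png")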

Performance tips

Setting the DETECTOR_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 440MB of VRAM, so very high batch sizes are possible. The default batch size is 36, which will use about 16GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 6.

From python

from PIL import Image
from surya.detection import DetectionPredictor

image = Image.open(IMAGE_PATH)
det_predictor = DetectionPredictor()

# predictions is a list of dicts, one per image
predictions = det_predictor([image])

Layout and reading order

This command will write out a json file with the detected layout and reading order.

surya_layout DATA_PATH
  • DATA_PATH can be an image, pdf, or folder of images/pdfs
  • --images will save images of the pages and detected text lines (optional)
  • --output_dir specifies the directory to save results to instead of the default
  • --page_range specifies the page range to process in the PDF, specified as a single number, a comma separated list, a range, or comma separated ranges - example: 0,5-10,20.

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains (a sketch for walking the reading order follows this list):

  • bboxes - detected bounding boxes for text
    • bbox - the axis-aligned rectangle for the text line in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner.
    • polygon - the polygon for the text line in (x1, y1), (x2, y2), (x3, y3), (x4, y4) format. The points are in clockwise order from the top left.
    • position - the reading order of the box.
    • label - the label for the bbox. One of Caption, Footnote, Formula, List-item, Page-footer, Page-header, Picture, Figure, Section-header, Table, Form, Table-of-contents, Handwriting, Text, Text-inline-math.
    • top_k - the top-k other potential labels for the box. A dictionary with labels as keys and confidences as values.
  • page - the page number in the file
  • image_bbox - the bbox for the image in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner. All line bboxes will be contained within this bbox.
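
For example, a sketch that walks a page in reading order and pulls out the section headers (the results.json path is hypothetical; the fields are the ones described above):

import json

with open("results.json") as f:
    pages = next(iter(json.load(f).values()))  # pages of the first input file

for page in pages:
    # position encodes the reading order, so sorting by it walks the page in order
    for box in sorted(page["bboxes"], key=lambda b: b["position"]):
        if box["label"] == "Section-header":
            print(page["page"], box["position"], box["bbox"])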

Performance tips

Setting the LAYOUT_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 220MB of VRAM, so very high batch sizes are possible. The default batch size is 32, which will use about 7GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 4.

From python

from PIL import Image
from surya.layout import LayoutPredictor

image = Image.open(IMAGE_PATH)
layout_predictor = LayoutPredictor()

# layout_predictions is a list of dicts, one per image
layout_predictions = layout_predictor([image])

Table Recognition

This command will write out a json file with the detected table cells and row/column ids, along with row/column bounding boxes. If you want to get cell positions and text, along with nice formatting, check out the marker repo. You can use the TableConverter to detect and extract tables in images and PDFs. It supports output in json (with bboxes), markdown, and html.

surya_table DATA_PATH
  • DATA_PATH can be an image, pdf, or folder of images/pdfs
  • --images will save images of the pages and detected table cells + rows and columns (optional)
  • --output_dir specifies the directory to save results to instead of the default
  • --page_range specifies the page range to process in the PDF, specified as a single number, a comma separated list, a range, or comma separated ranges - example: 0,5-10,20.
  • --detect_boxes specifies if cells should be detected. By default, they're pulled out of the PDF, but this is not always possible.
  • --skip_table_detection tells table recognition not to detect tables first. Use this if your image is already cropped to a table.

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains (a sketch for rebuilding a table grid follows this list):

  • rows - detected table rows
    • bbox - the bounding box of the table row
    • row_id - the id of the row
    • is_header - if it is a header row.
  • cols - detected table columns
    • bbox - the bounding box of the table column
    • col_id - the id of the column
    • is_header - if it is a header column
  • cells - detected table cells
    • bbox - the axis-aligned rectangle for the cell in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner.
    • text - if text could be pulled out of the pdf, the text of this cell.
    • row_id - the id of the row the cell belongs to.
    • col_id - the id of the column the cell belongs to.
    • colspan - the number of columns spanned by the cell.
    • rowspan - the number of rows spanned by the cell.
    • is_header - whether it is a header cell.
  • page - the page number in the file
  • table_idx - the index of the table on the page (sorted in vertical order)
  • image_bbox - the bbox for the image in (x1, y1, x2, y2) format. (x1, y1) is the top left corner, and (x2, y2) is the bottom right corner. All line bboxes will be contained within this bbox.
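
As an illustration, a sketch that rebuilds a plain-text grid from the cells using their row/column ids (the results.json path is hypothetical; text is only present when it could be pulled from the PDF):

import json
from collections import defaultdict

with open("results.json") as f:
    tables = next(iter(json.load(f).values()))  # results for the first input file

for table in tables:
    grid = defaultdict(dict)
    for cell in table["cells"]:
        grid[cell["row_id"]][cell["col_id"]] = (cell.get("text") or "").strip()

    col_ids = sorted({col for row in grid.values() for col in row})
    print(f"table {table['table_idx']} on page {table['page']}:")
    for row_id in sorted(grid):
        print(" | ".join(grid[row_id].get(col, "") for col in col_ids))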

Performance tips

Setting the TABLE_REC_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 150MB of VRAM, so very high batch sizes are possible. The default batch size is 64, which will use about 10GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 8.

From python

from PIL import Image
from surya.table_rec import TableRecPredictor

image = Image.open(IMAGE_PATH)
table_rec_predictor = TableRecPredictor()

table_predictions = table_rec_predictor([image])

LaTeX OCR

This command will write out a json file with the LaTeX of the equations. You must pass in images that are already cropped to the equations. You can do this by running the layout model, then cropping, if you want (see the sketch at the end of this section).

surya_latex_ocr DATA_PATH
  • DATA_PATH can be an image, pdf, or folder of images/pdfs
  • --output_dir specifies the directory to save results to instead of the default
  • --page_range specifies the page range to process in the PDF, specified as a single number, a comma separated list, a range, or comma separated ranges - example: 0,5-10,20.

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains:

  • text - the detected LaTeX text - it will be in KaTeX compatible LaTeX, with <math display="block">...</math> and <math>...</math> as delimiters.
  • confidence - the prediction confidence from 0-1.
  • page - the page number in the file

From python

from PIL import Image
from surya.texify import TexifyPredictor

image = Image.open(IMAGE_PATH)
predictor = TexifyPredictor()

predictor([image])
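
To go from a full page to equation crops, one possible approach (a sketch that assumes you have already run surya_layout and have its results.json; the paths are hypothetical) is to crop the Formula regions and feed the crops to the predictor:

import json
from PIL import Image
from surya.texify import TexifyPredictor

page = Image.open("page.png").convert("RGB")
with open("layout_results.json") as f:
    layout_page = next(iter(json.load(f).values()))[0]  # first page of the first input file

# Crop every region the layout model labeled as a formula.
crops = [page.crop(tuple(box["bbox"])) for box in layout_page["bboxes"] if box["label"] == "Formula"]

if crops:
    predictor = TexifyPredictor()
    latex_predictions = predictor(crops)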

Interactive app

You can also run a special interactive app that lets you select equations and OCR them (kind of like MathPix snip) with:

pip install streamlit==1.40 streamlit-drawable-canvas-jsretry
texify_gui

Limitations

  • This is specialized for document OCR. It will likely not work on photos or other images.
  • It is for printed text, not handwriting (though it may work on some handwriting).
  • The text detection model has trained itself to ignore advertisements.
  • You can find language support for OCR in surya/languages.py. Text detection, layout analysis, and reading order will work with any language.

Troubleshooting

If OCR isn't working properly:

  • Try increasing the resolution of the image so the text is bigger. If the resolution is already very high, try decreasing it to no more than a 2048px width.
  • Preprocessing the image (binarizing, deskewing, etc) can help with very old/blurry images.
  • You can adjust DETECTOR_BLANK_THRESHOLD and DETECTOR_TEXT_THRESHOLD if you don't get good results (see the sketch below for setting them). DETECTOR_BLANK_THRESHOLD controls the space between lines - any prediction below this number will be considered blank space. DETECTOR_TEXT_THRESHOLD controls how text is joined - any number above this is considered text. DETECTOR_TEXT_THRESHOLD should always be higher than DETECTOR_BLANK_THRESHOLD, and both should be in the 0-1 range. Looking at the heatmap from the debug output of the detector can tell you how to adjust these (if you see faint things that look like boxes, lower the thresholds, and if you see bboxes being joined together, raise the thresholds).
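
For example, a sketch of overriding the thresholds before running detection (the values below are placeholders, not recommended settings):

import os

# Placeholder values - keep both in the 0-1 range, with the text threshold above the blank threshold.
os.environ["DETECTOR_TEXT_THRESHOLD"] = "0.6"
os.environ["DETECTOR_BLANK_THRESHOLD"] = "0.35"

from PIL import Image
from surya.detection import DetectionPredictor

det_predictor = DetectionPredictor()
predictions = det_predictor([Image.open("page.png")])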

Manual install

If you want to develop surya, you can install it manually:

  • git clone https://github.com/VikParuchuri/surya.git
  • cd surya
  • poetry install - installs main and dev dependencies
  • poetry shell - activates the virtual environment

Benchmarks

OCR

[Benchmark chart: surya vs tesseract]

Model | Time per page (s) | Avg similarity (⬆)
surya | 0.62 | 0.97
tesseract | 0.45 | 0.88

Full language results

Tesseract is CPU-based, and surya is CPU or GPU. I tried to cost-match the resources used, so I used a 1xA6000 (48GB VRAM) for surya, and 28 CPU cores for Tesseract (same price on Lambda Labs/DigitalOcean).

Google Cloud Vision

I benchmarked OCR against Google Cloud Vision since it has similar language coverage to Surya.

[Benchmark chart: surya vs Google Cloud Vision]

Full language results

Methodology

I measured normalized sentence similarity (0-1, higher is better) based on a set of real-world and synthetic pdfs. I sampled PDFs from common crawl, then filtered out the ones with bad OCR. I couldn't find PDFs for some languages, so I also generated simple synthetic PDFs for those.

I used the reference line bboxes from the PDFs with both tesseract and surya, so that only OCR quality (not detection) was evaluated.

For Google Cloud, I aligned the output from Google Cloud with the ground truth. I had to skip RTL languages since they didn't align well.

Text line detection

Benchmark chart

Model | Time (s) | Time per page (s) | Precision | Recall
surya | 47.2285 | 0.094452 | 0.835857 | 0.960807
tesseract | 74.4546 | 0.290838 | 0.631498 | 0.997694

Tesseract is CPU-based, and surya is CPU or GPU. I ran the benchmarks on a system with an A10 GPU, and a 32 core CPU. This was the resource usage:

  • tesseract - 32 CPU cores, or 8 workers using 4 cores each
  • surya - 36 batch size, for 16GB VRAM usage

Methodology

Surya predicts line-level bboxes, while tesseract and others predict word-level or character-level. It's hard to find 100% correct datasets with line-level annotations. Merging bboxes can be noisy, so I chose not to use IoU as the metric for evaluation.

I instead used coverage, which calculates:

  • Precision - how well the predicted bboxes cover ground truth bboxes
  • Recall - how well ground truth bboxes cover predicted bboxes

First calculate coverage for each bbox, then add a small penalty for double coverage, since we want the detection to have non-overlapping bboxes. Anything with a coverage of 0.5 or higher is considered a match.

Then we calculate precision and recall for the whole dataset.
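
A simplified sketch of this coverage metric (my own reading of the description above; the penalty weight is a placeholder, not the value used in the benchmark):

def overlap_area(a, b):
    # Intersection area of two (x1, y1, x2, y2) bboxes.
    width = min(a[2], b[2]) - max(a[0], b[0])
    height = min(a[3], b[3]) - max(a[1], b[1])
    return max(width, 0) * max(height, 0)

def coverage(box, other_boxes, penalty=0.05):
    # Fraction of `box` covered by `other_boxes`, minus a small penalty when
    # more than one box overlaps it (double coverage).
    area = (box[2] - box[0]) * (box[3] - box[1])
    if area <= 0:
        return 0.0
    overlaps = [overlap_area(box, other) for other in other_boxes]
    covered = min(sum(overlaps) / area, 1.0)
    doubly_covered = sum(1 for o in overlaps if o > 0) > 1
    return covered - (penalty if doubly_covered else 0.0)

def precision_recall(pred_boxes, gt_boxes, threshold=0.5):
    # Precision: how well the predicted bboxes cover the ground truth bboxes.
    precision = sum(coverage(gt, pred_boxes) >= threshold for gt in gt_boxes) / max(len(gt_boxes), 1)
    # Recall: how well the ground truth bboxes cover the predicted bboxes.
    recall = sum(coverage(pred, gt_boxes) >= threshold for pred in pred_boxes) / max(len(pred_boxes), 1)
    return precision, recall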

Layout analysis

Layout Type | Precision | Recall
Image | 0.91265 | 0.93976
List | 0.80849 | 0.86792
Table | 0.84957 | 0.96104
Text | 0.93019 | 0.94571
Title | 0.92102 | 0.95404

Time per image - 0.13 seconds on an A10 GPU.

Methodology

I benchmarked the layout analysis on Publaynet, which was not in the training data. I had to align publaynet labels with the surya layout labels. I was then able to find coverage for each layout type:

  • Precision - how well the predicted bboxes cover ground truth bboxes
  • Recall - how well ground truth bboxes cover predicted bboxes

Reading Order

88% mean accuracy, and 0.4 seconds per image on an A10 GPU. See methodology for notes - this benchmark is not a perfect measure of accuracy, and is more useful as a sanity check.

Methodology

I benchmarked the reading order on the layout dataset from here, which was not in the training data. Unfortunately, this dataset is fairly noisy, and not all the labels are correct. It was very hard to find a dataset annotated with both reading order and layout information. I wanted to avoid using a cloud service for the ground truth.

The accuracy is computed by checking whether each pair of layout boxes is in the correct order, then taking the percentage of pairs that are correct.
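
A sketch of that pairwise accuracy (a simplified reading of the description, not the benchmark script itself):

from itertools import combinations

def pairwise_order_accuracy(predicted_order, ground_truth_order):
    # Both arguments map a box id to its position in the reading order.
    pairs = list(combinations(ground_truth_order.keys(), 2))
    if not pairs:
        return 1.0
    correct = sum(
        (predicted_order[a] < predicted_order[b]) == (ground_truth_order[a] < ground_truth_order[b])
        for a, b in pairs
    )
    return correct / len(pairs)

# Example: four boxes with one adjacent pair swapped in the prediction -> 5 of 6 pairs correct.
print(pairwise_order_accuracy({0: 0, 1: 2, 2: 1, 3: 3}, {0: 0, 1: 1, 2: 2, 3: 3}))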

Table Recognition

Model | Row Intersection | Col Intersection | Time per image (s)
Surya | 1 | 0.98625 | 0.30202
Table transformer | 0.84 | 0.86857 | 0.08082

Higher is better for intersection, which is the percentage of the actual row/column overlapped by the predictions. This benchmark is mostly a sanity check - there is a more rigorous one in marker.

Methodology

The benchmark uses a subset of Fintabnet from IBM. It has labeled rows and columns. After table recognition is run, the predicted rows and columns are compared to the ground truth. There is an additional penalty for predicting too many or too few rows/columns.

LaTeX OCR

Method | Edit distance (⬇) | Time taken (s) (⬇)
texify | 0.122617 | 35.6345

This runs texify on a ground truth set of LaTeX, then computes the edit distance. This is a bit noisy, since two LaTeX strings that render identically can contain different symbols.

Running your own benchmarks

You can benchmark the performance of surya on your machine.

  • Follow the manual install instructions above.
  • poetry install --group dev - installs dev dependencies

Text line detection

This will evaluate tesseract and surya for text line detection across a randomly sampled set of images from doclaynet.

python benchmark/detection.py --max_rows 256
  • --max_rows controls how many images to process for the benchmark
  • --debug will render images and detected bboxes
  • --pdf_path will let you specify a pdf to benchmark instead of the default data
  • --results_dir will let you specify a directory to save results to instead of the default one

Text recognition

This will evaluate surya and optionally tesseract on multilingual pdfs from common crawl (with synthetic data for missing languages).

python benchmark/recognition.py --tesseract
  • --max_rows controls how many images to process for the benchmark

  • --debug 2 will render images with detected text

  • --results_dir will let you specify a directory to save results to instead of the default one

  • --tesseract will run the benchmark with tesseract. You have to run sudo apt-get install tesseract-ocr-all to install all tesseract data, and set TESSDATA_PREFIX to the path to the tesseract data folder.

  • Set RECOGNITION_BATCH_SIZE=864 to use the same batch size as the benchmark.

  • Set RECOGNITION_BENCH_DATASET_NAME=vikp/rec_bench_hist to use the historical document data for benchmarking. This data comes from the tapuscorpus.

Layout analysis

This will evaluate surya on the publaynet dataset.

python benchmark/layout.py
  • --max_rows controls how many images to process for the benchmark
  • --debug will render images with detected text
  • --results_dir will let you specify a directory to save results to instead of the default one

Reading Order

python benchmark/ordering.py
  • --max_rows controls how many images to process for the benchmark
  • --debug will render images with detected text
  • --results_dir will let you specify a directory to save results to instead of the default one

Table Recognition

python benchmark/table_recognition.py --max_rows 1024 --tatr
  • --max_rows controls how many images to process for the benchmark
  • --debug will render images with detected text
  • --results_dir will let you specify a directory to save results to instead of the default one
  • --tatr specifies whether to also run table transformer

LaTeX OCR

python benchmark/texify.py --max_rows 128
  • --max_rows controls how many images to process for the benchmark
  • --results_dir will let you specify a directory to save results to instead of the default one

Training

Text detection was trained on 4x A6000s for 3 days. It used a diverse set of images as training data. It was trained from scratch using a modified efficientvit architecture for semantic segmentation.

Text recognition was trained on 4x A6000s for 2 weeks. It was trained using a modified donut model (GQA, MoE layer, UTF-16 decoding, layer config changes).

Thanks

This work would not have been possible without amazing open source AI work.

Thank you to everyone who makes open source AI possible.
