Show HN: OCR pipeline for ML training (tables, diagrams, math, multilingual)
github.comHi HN,
I’ve been working on an OCR pipeline specifically optimized for machine learning dataset preparation. It’s designed to process complex academic materials — including math formulas, tables, figures, and multilingual text — and output clean, structured formats like JSON and Markdown.
Some features: • Multi-stage OCR combining DocLayout-YOLO, Google Vision, MathPix, and Gemini Pro Vision • Extracts and understands diagrams, tables, LaTeX-style math, and multilingual text (Japanese/Korean/English) • Highly tuned for ML training pipelines, including dataset generation and preprocessing for RAG or fine-tuning tasks
Sample outputs and real exam-based examples are included (EJU Biology, UTokyo Math, etc.) Would love to hear any feedback or ideas for improvement.
LLMs for OCR is super risky because just as much as they can fix OCR mistakes, they can inadvertently "fix" correct stuff too and hallucinate instead.
Its that xerox bug on steroids, where scanned pages would get their digits swapped by other digits...
I'd want to see some proper hallucination analysis.
Yeah, hallucination part was also one thing I was worry about. So I make LLM only run after OCR step, and I put simple check to not change correct text. I will try to show real examples and hallucination rate too. Thanks for feedback!
This project was just hobby and my first time posting something. I didn’t imagine people would care this much… Next time I will prepare better before sharing.
Also, what about prompt injection? With an LLM as far as I'm aware there is never a clear separation between instruction and the data to be processed.
Yeah, prompt injection is good point. For now, I try separate instruction and data by using JSON format, and run it in sandbox. Not perfect maybe, but I will try add small explanation in README so people can check it better.
For the more curious: there is also Unstract open source for pipeline. Lets us plug in your AI stack eg. OS llm models, vector db, ocr parsers etc.
https://github.com/Zipstack/unstract
> Never change the original language of any text. Keep Korean in Korean, Japanese in Japanese, and English in English.
I love the double prompting to keep GPT from translating the text. I've definitely had this problem before, and spent ages trying to prompt it into not randomly translating the text.
Yeah — I ran into that exact problem during early testing. The prompt has since been adjusted to prevent GPT from auto-translating non-English text (Korean, Japanese, etc.).
If it still misbehaves in any edge cases, feel free to open an issue on GitHub — happy to patch it up.
What’s the use of using generative AI to OCR the text?
Great question — I’m using traditional OCR engines for the initial text extraction (e.g., MathPix, Google Vision), but then I apply generative AI models in a second stage to refine the output. This includes removing noisy or irrelevant elements, normalizing format inconsistencies, and improving alignment across multi-modal inputs.
In addition, for figures and diagrams, I use Gemini Pro Vision not just to extract the content, but to generate context-aware, structured descriptions that are better suited as ML training input — rather than just dumping raw image text.
So in short, generative AI is used here more as a smart post-processing layer to enhance the usability and semantic clarity of the OCR outputs.
> Built With: DocLayout-YOLO, Google Vision API, Gemini Pro Vision, MathPix OCR, OpenAI API, OpenCV, and more.
the whole pipeline is not open source
Yep — some components currently rely on external APIs (e.g. OpenAI, MathPix), primarily for stability and ease of deployment during early release. But I’m planning to support fully local inference in the future to eliminate API key dependency.
The local pipeline would include:
• Tesseract or TrOCR for general OCR
• Pix2Struct, Donut, or DocTR for document structure understanding
• OpenAI CLIP for image-text semantic alignment
• Gemma / Phi / LLaMA / Mistral for downstream reasoning tasks
Goal is to make the system fully self-hostable for offline and private use.
How does this compare against marker[1]?
1: https://github.com/VikParuchuri/marker
Thanks for sharing — Marker is a great tool, especially for human-readable formatting!
In contrast, this project focuses less on preserving the visual layout for human readers, and more on extracting structured semantic data for machine learning training.
So instead of optimizing for clean Markdown or HTML, it extracts context-aware elements like:
• table data as JSON,
• math expressions in LaTeX,
• diagrams with image descriptions,
• multilingual text segments,
• and semantic roles (e.g. “question”, “explanation”, etc.)
In short: Marker is great for reading, This is built for feeding into ML pipelines — especially for tasks like question-answering, diagram reasoning, or multimodal pretraining.
[dead]
so you are saying i can feed my last 10 years of exam question papers and get predictions on what we will get this year?
Haha not exactly like predicting actual questions. Just trying to find patterns or what topics show up often. I made this to help my study, didn’t think people would care this much.
super great work -- do you convert math formula to latex &/or how is that or other symbolic not necessarily unicode chars handled?
Thanks a lot! Yeah, theoretically the pipeline handles math and special symbols fine, and from my testing it worked well. But I didn’t test much on other languages or encodings, so if there’s any weird behavior, please let me know and I’ll check it!
Did you ethically acquire permission to train on the data set?
Yep — this project uses a pre-trained DocLayout-YOLO model released under an open license by the original authors. No additional datasets were used for training. All sample data in the repo is either synthetic, publicly available, or user-generated specifically for testing purposes. If there are any concerns about specific models or datasets, I’m happy to review them and make adjustments as needed.
DocLayout-YOLO model is under the AGPL-3.0 license, it's not permissive. You can't have your project under the MIT license and also use copyleft software.
I’m sorry that I didn’t know that detail, thank you so much for letting me know! I’ll read AGPL-3.0 license more carefully and check if it’s okay with MIT. If not, I’ll fix license or change model. really appreciate your help!
Curious if there are plans to update this. Seems interesting.
Thanks! Yes — I’m definitely planning to update and refine the project over time.
This initial release is mostly a working prototype to demonstrate the full pipeline logic, and I’ll continue improving stability, modularity, and usability. A lot more updates are in the pipeline, so stay tuned! Feel free to open issues or suggestions anytime — feedback is always welcome!
[dead]
This is a valuable contribution. The quality of ML models heavily depends on the quality of training data, and extracting structured information from unstructured documents (like PDFs) is a critical bottleneck.
A key challenge after OCR is organizing the extracted data into a coherent knowledge structure. We've seen significant improvements in downstream ML tasks when the extracted data is organized using a hierarchical, MECE (Mutually Exclusive, Collectively Exhaustive) framework. This ensures that relationships between entities (tables, diagrams, text) are explicitly captured.
Does your pipeline include capabilities for semantic structuring of the extracted content beyond basic layout analysis? That seems like the next frontier for maximizing the value of OCR data in ML training.
Thanks for the insightful comment! You’re absolutely right — organizing extracted data into a coherent, semantically meaningful structure is critical for high-quality ML training.
Right now, the pipeline focuses on generating OCR outputs optimized for ML models by cleaning, deduplicating, and segmenting content across modalities (text, tables, figures, formulas). For diagrams and tables, we add semantic tags and preserve layout relationships to aid downstream modeling.
I’m planning to add a semantic structuring module that goes beyond basic layout analysis — something that builds hierarchical, MECE-style representations and identifies entity relationships across sections. That’s absolutely the next frontier, and I really appreciate you pointing it out.
Thanks again for the thoughtful feedback!
why are you using an LLM to reply to every comment?
Haha good catch! I’m 19 and from Korea, so I’ve been using an LLM to help with replies since my English isn’t perfect yet. But I designed and built the project myself (with help from some open models/tools) — just wanted to communicate more clearly with the community!
[Hi from Argentina!] LLM have a particular style that will make people suspictious or even angry.
One posibility is to write the answer in Korean and use autotranslation. (And post only the autotranslation.) Double check the technical terms, because autotranslation sometimes choose the wrong synonym.
Another posibility is to write the answer in English inside gmail, and gmail will highlight orthographical and gramar errors. So you can fix them.
Most people here will tolerate a few mistakes if the answer has your own personal style.
(Nice project, by the way.)
Yes, writing that is suspictious makes me angry.
>> suspitious
:( My phone does not have orthography correction, and I didn't have my notebook.
Edit: fixed typo: gave -> have
Por esa misma razón, un LLM te habría funcionado perfectamente: desplegando tus pensamientos tal como querías, pero sin las distracciones causadas por la mala ortografía o los errores gramaticales. Los LLM son herramientas —como bien sabes— que ya son esenciales y lo serán aún más con el paso del tiempo. Que algunos en esta plataforma se irriten por su uso solo significa que, eventualmente, se convertirán en los dinosaurios del futuro.
For that very reason, an LLM would have worked perfectly for you: laying out your thoughts just as you intended, but without the distractions caused by poor spelling or grammatical mistakes. LLMs are tools—as you well know—that are already essential and will become even more so over time. The fact that some people on this platform get irritated by their use just means they’ll eventually become the dinosaurs of the future.
Genuinely curious—could it be for the same reason you used a keyboard to write that comment? It’s efficient, it works. What’s the actual issue with using a tool that helps convey the intended message more clearly and quickly, as long as it reflects what he wanted to say?
why are you offended on behalf of this person? the hindsight that they're simply an English learner obviously makes me feel bad for asking the question, but i don't think it's unreasonable to think that someone who speaks entirely in ChatGPT paragraphs might be a bot, spammer, or the like