Show HN: Zonformat – 35–60% fewer LLM tokens using zero-overhead notation

zonformat.org

2 points by ronibhakta 21 hours ago

Hey HN!

Roni from India — ex-Google Summer of Code (GSoC) at the Internet Archive, full-stack dev.

Got frustrated watching JSON bloat my OpenAI/Claude bills by 50%+ in redundant syntax, so I built ZON over a few weekends: zero-overhead notation that compresses payloads by roughly half vs JSON (692 tokens vs 1,300 on my gpt-5-nano benchmarks, i.e. ~47% fewer) while staying 100% human-readable and lossless.
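The gist of where the savings come from: JSON repeats every key, quote, and brace on each row, while a header-once layout states the schema a single time. A rough illustration (the exact ZON grammar may differ; the playground shows real output):

    // JSON: keys, quotes, and braces repeated per row
    [{"id":1,"name":"ada","role":"admin"},{"id":2,"name":"bob","role":"user"}]

    // Header-once layout: schema stated once, rows follow
    users[2]{id,name,role}:
      1,ada,admin
      2,bob,user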

Playground -> https://zonformat.org/playground

ROI calculator -> https://zonformat.org/savings

It's a <2 KB TypeScript lib with 100% test coverage. Drop-in for the OpenAI SDK, LangChain JS/TS, Claude, llama.cpp, and streaming, plus Zod schemas to validate LLM outputs at runtime with zero extra cost.
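Roughly what that flow could look like in TypeScript (a sketch only; the encode/decode export names here are assumptions — check the README for the real API):

    import { encode, decode } from "zon-format"; // export names assumed
    import OpenAI from "openai";
    import { z } from "zod";

    const openai = new OpenAI();

    // Encode a bulky JSON payload into ZON before it hits the prompt.
    const rows = [
      { id: 1, name: "ada", role: "admin" },
      { id: 2, name: "bob", role: "user" },
    ];
    const res = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: `Summarize these users:\n${encode(rows)}` }],
    });

    // Validate the model's ZON-encoded reply against a Zod schema at runtime.
    const Reply = z.object({ summary: z.string() });
    const parsed = Reply.parse(decode(res.choices[0].message.content ?? ""));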

Benchmarks -> https://zonformat.org/#benchmarks

Try it: npm i zon-format (or uv add zon-format for Python), then encode/decode in under 10 seconds (code in the README). Full site with benchmarks: https://zonformat.org
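And the lossless claim is easy to check yourself: a round trip should hand back exactly what you put in (same caveat on the export names):

    import { encode, decode } from "zon-format"; // export names assumed

    const data = { users: [{ id: 1, name: "ada" }, { id: 2, name: "bob" }] };
    const zon = encode(data);  // compact, token-cheap string for the prompt
    const back = decode(zon);  // lossless round trip back to the object
    console.assert(JSON.stringify(back) === JSON.stringify(data));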

GitHub -> https://github.com/ZON-Format

Harsh feedback on perf, edge cases, or the API is very welcome. If it saves you a coffee's worth of tokens, a star would be awesome.

Let's make LLM prompts efficient again.

sahilagarwal 20 hours ago

A playground for the ZON format is great, but it would be amazing to see a few examples where ZON has already been integrated into an LLM, along with its responses to user queries. It doesn't even need to be a playground (that gets costly quickly); just some examples so users can see how the black box behaves once ZON is integrated.

  • ronibhakta 11 hours ago

    Thanks for the feedback, Sahil. There are examples of exactly that right above the benchmarks on the homepage; you can try them here -> https://zonformat.org/

    And here are some extra ones to look at -> https://zonformat.org/docs/eval-llms#real-world-benchmark

    I've attached the script and the logs showing how ZON consistently outperforms TOON, and how ZON-Format's Eval feature helps reduce LLM hallucination.

usefulposter 20 hours ago

Let's make the English language mean something again.

If you're————————going————————to use an LLM, can you at least reformat this post so it doesn't sound like @sama?