Security and trust

Security, access, and data handling built into the product.

SuperInputs is designed to keep access scoped to accounts, keep plan controls server-side, and handle uploads and exports in a predictable way.

How SuperInputs handles trust and control

These are the protections built into the product today.

Account-based access

Jobs and exports stay tied to signed-in accounts, and private routes are excluded from search indexing.

Limited raw-upload retention

Uploads are processed into structured outputs without keeping raw ZIPs around longer than necessary.

Server-side plan controls

Plan limits and page reservations are enforced on the backend, not left to client-side checks.
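The idea behind server-side enforcement can be sketched as follows. This is an illustrative sketch only, not SuperInputs' actual code; the interface and function names are hypothetical.

```typescript
// Hypothetical shape of per-account usage for a billing cycle.
interface PlanUsage {
  pageLimit: number;     // pages allowed per billing cycle
  pagesUsed: number;     // pages already processed this cycle
  pagesReserved: number; // pages held for in-flight jobs
}

// A request fits only if completed + reserved + requested pages
// stay within the plan limit.
function canReserve(usage: PlanUsage, requestedPages: number): boolean {
  const committed = usage.pagesUsed + usage.pagesReserved;
  return committed + requestedPages <= usage.pageLimit;
}

// The check and the reservation happen together on the server,
// so editing client-side state cannot bypass the limit.
function reservePages(usage: PlanUsage, requestedPages: number): PlanUsage {
  if (!canReserve(usage, requestedPages)) {
    throw new Error("Plan limit exceeded");
  }
  return { ...usage, pagesReserved: usage.pagesReserved + requestedPages };
}
```

Because the reservation is the only path to processing, a tampered client can at most send a request that the server rejects.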

Usage visibility

Billing and usage records help teams see what has been processed and what remains in a billing cycle.

What this means in practice

Private job history stays private

Login and extraction-history routes are intentionally excluded from indexing so crawlers do not surface account-only content.
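One common way to keep private routes out of search results is the `X-Robots-Tag` response header. The sketch below is illustrative; the route prefixes are hypothetical, not SuperInputs' actual paths.

```typescript
// Hypothetical private route prefixes for illustration.
const PRIVATE_PREFIXES = ["/login", "/extractions"];

function isPrivateRoute(path: string): boolean {
  return PRIVATE_PREFIXES.some((p) => path === p || path.startsWith(p + "/"));
}

// Headers a server would attach to responses for private routes so
// crawlers neither index the page nor follow its links.
function robotsHeaders(path: string): Record<string, string> {
  return isPrivateRoute(path) ? { "X-Robots-Tag": "noindex, nofollow" } : {};
}
```

Applied as middleware, this keeps account-only pages invisible to crawlers even if a URL leaks into a public link.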

Uploads move through a controlled pipeline

Signed uploads, background preparation, and worker-based processing help keep document processing predictable at higher volumes.
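A signed upload typically means the server issues a short-lived, signed token binding an account to an object key, and the upload endpoint verifies that signature before accepting any bytes. The sketch below shows the general technique with Node's built-in `crypto` module; the field layout and secret handling are illustrative assumptions, not SuperInputs' implementation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative only: in practice the secret comes from server config.
const SECRET = "server-side-secret";

// Sign the account id, object key, and expiry into an upload token.
function signUpload(accountId: string, objectKey: string, expiresAt: number): string {
  const payload = `${accountId}:${objectKey}:${expiresAt}`;
  const sig = createHmac("sha256", SECRET).update(payload).digest("hex");
  return `${payload}:${sig}`;
}

// Verify expiry and signature before accepting the upload.
function verifyUpload(token: string, now: number): boolean {
  const parts = token.split(":");
  if (parts.length !== 4) return false;
  const [accountId, objectKey, expiresAt, sig] = parts;
  if (now > Number(expiresAt)) return false; // expired token
  const expected = createHmac("sha256", SECRET)
    .update(`${accountId}:${objectKey}:${expiresAt}`)
    .digest("hex");
  // Constant-time comparison avoids leaking signature bytes.
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}
```

Because the token expires and is tied to one object key, a leaked upload URL cannot be reused indefinitely or pointed at a different account's storage.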

Usage is visible before large runs

Teams can understand current plan usage, remaining quota, and reserved pages instead of guessing what a batch will consume.
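A pre-run usage summary reduces to simple arithmetic over the same three numbers. This is a minimal sketch with hypothetical names, assuming usage is tracked as a limit, pages used, and pages reserved.

```typescript
// Hypothetical per-cycle usage figures, as a team would see them.
interface CycleUsage { limit: number; used: number; reserved: number; }

// Pages still available after counting completed and reserved work.
function remainingPages(u: CycleUsage): number {
  return Math.max(0, u.limit - u.used - u.reserved);
}

// Answer "will this batch fit?" before submitting it.
function batchFits(u: CycleUsage, batchPages: number): boolean {
  return batchPages <= remainingPages(u);
}
```

Surfacing `remainingPages` in the UI lets a team size a batch before submitting it rather than discovering mid-run that the quota is exhausted.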

Frequently asked questions

These answers reflect how SuperInputs works today.

How does SuperInputs handle access to jobs and exports?

Jobs and exports are scoped to authenticated accounts, and private extraction history pages are not indexable.

Are raw uploads kept forever?

No. Raw ZIPs are handled on a short-lived basis: once uploads are processed into structured outputs, the raw inputs fall under retention policies and do not need to stay available indefinitely.
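A retention policy like this often combines two triggers: delete raw inputs once they have been processed, and expire any stragglers after a fixed window. The sketch below is a hypothetical illustration of that pattern, with an assumed 24-hour window; it is not SuperInputs' actual retention logic.

```typescript
// Hypothetical record of a raw upload awaiting cleanup.
interface RawUpload { key: string; uploadedAt: number; processed: boolean; }

// Illustrative 24-hour fallback window for unprocessed raw inputs.
const RAW_TTL_MS = 24 * 60 * 60 * 1000;

// Keys eligible for deletion: anything already processed into structured
// output, plus anything older than the fallback window.
function expiredRawUploads(uploads: RawUpload[], now: number): string[] {
  return uploads
    .filter((u) => u.processed || now - u.uploadedAt > RAW_TTL_MS)
    .map((u) => u.key);
}
```

A periodic cleanup job that deletes these keys keeps raw input lifetime bounded regardless of processing outcomes.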

Why is usage tracking part of the product?

Usage and billing records help with plan enforcement, accountability, and visibility into what has already been processed in a billing cycle.

Want to test the product before running larger batches?

Start free, preview the schema on a sample file, and then move to the full batch when the structure looks right.