raipii is in early access. Not all features are released yet.
Now available on AWS Marketplace

PII-safe prompts,
in one API call

Detect and sanitize PII before it reaches your LLM. Replace real data with tokens or realistic fakes. Restore original values after the model responds.

Python
# ps: a configured raipii client (see the quickstart for setup)

# Before: raw prompt hits your LLM
prompt = "Help John Smith (john@acme.com, SSN 392-45-7810)"

# Sanitize with raipii
result = ps.sanitize(prompt, mode="fake_substitute")
# → "Help Michael Torres (m.torres@email.net, SSN 847-23-1956)"

# Restore after the LLM responds
original = ps.restore(llm_response, result.session_id)
# → Real names/emails back in the response
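The round trip above can be sketched in plain Python. This is an illustrative token mode, not the raipii internals: the regexes, the `[EMAIL_1]`-style token format, and the function names are all assumptions for the sketch.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize(prompt):
    """Replace each PII match with a token; return sanitized text + mapping."""
    mapping = {}
    def tokenize(kind):
        def repl(m):
            token = f"[{kind}_{len(mapping) + 1}]"
            mapping[token] = m.group(0)  # remember the real value
            return token
        return repl
    text = EMAIL_RE.sub(tokenize("EMAIL"), prompt)
    text = SSN_RE.sub(tokenize("SSN"), text)
    return text, mapping

def restore(response, mapping):
    """Swap tokens back for the original values after the LLM responds."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

safe, mapping = sanitize("Help John Smith (john@acme.com, SSN 392-45-7810)")
# safe == "Help John Smith ([EMAIL_1], SSN [SSN_2])"
print(restore(safe, mapping))  # → the original prompt
```

The key design point: the mapping never leaves your process, so the LLM only ever sees tokens.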

Everything your LLM pipeline needs

Dual-engine detection

AWS Comprehend for context (names, addresses) + regex for structured PII (SSN, credit cards, JWTs).

Fake data substitution

Replace PII with realistic Faker-generated values. LLMs reason naturally over fake data.

HIPAA-ready

Enable HIPAA mode to skip Comprehend entirely. Regex-only detection, no PHI leaves your region.

Multi-turn sessions

Conversation sessions keep consistent substitutions across all turns. Same entity → same fake.
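The "same entity → same fake" guarantee is just a per-session mapping. A stdlib-only sketch under stated assumptions (the `Session` class, the fake-name pool, and the hash-based pick are illustrative, not raipii's implementation):

```python
import hashlib

FAKE_NAMES = ["Michael Torres", "Anna Kim", "David Osei", "Lena Fischer"]

class Session:
    """Illustrative session: the same real entity always maps to the
    same fake for the life of the conversation."""
    def __init__(self):
        self.mapping = {}

    def fake_for(self, real_name):
        if real_name not in self.mapping:
            # deterministic pick so replays are stable; a real system
            # would also guarantee no collisions across entities
            idx = int(hashlib.sha256(real_name.encode()).hexdigest(), 16)
            self.mapping[real_name] = FAKE_NAMES[idx % len(FAKE_NAMES)]
        return self.mapping[real_name]

s = Session()
assert s.fake_for("John Smith") == s.fake_for("John Smith")  # turn 1 == turn 5
```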

Simple, usage-based pricing

Billed by characters processed. No seat fees.

Starter

$0, free forever

2M chars/month

  • Token + redact modes
  • Regex detection
  • 1hr session TTL
  • Community support
Get started

Growth

$49 per month

100M chars/month

  • All substitution modes
  • Comprehend detection
  • Fake data generation
  • 24hr conversation sessions
  • Email support
Start free trial

Business

$299 per month

1B chars/month

  • Everything in Growth
  • HIPAA mode
  • EU data residency
  • AWS PrivateLink
  • GDPR erasure API
  • Slack support
Contact us
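Picking a tier from a monthly character volume is simple arithmetic. The quotas and prices below come straight from the plans above; the sketch doesn't model overage, which the pricing doesn't specify.

```python
# (name, monthly price in USD, included chars/month) from the plans above
PLANS = [
    ("Starter", 0, 2_000_000),
    ("Growth", 49, 100_000_000),
    ("Business", 299, 1_000_000_000),
]

def cheapest_plan(chars_per_month):
    """Return the cheapest plan whose quota covers the monthly volume."""
    for name, price, quota in PLANS:
        if chars_per_month <= quota:
            return name, price
    return None  # above 1B chars/month: contact sales

print(cheapest_plan(50_000_000))  # → ('Growth', 49)
```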

Start sanitizing prompts in 5 minutes

No infrastructure. No Docker. One API key.

Try the playground