Detect and sanitize PII before it reaches your LLM. Replace real data with tokens or realistic fakes. Restore original values after the model responds.
```python
# Before: raw prompt hits your LLM
prompt = "Help John Smith (john@acme.com, SSN 392-45-7810)"

# Sanitize with raipii (assumes ps is an initialized raipii client)
result = ps.sanitize(prompt, mode="fake_substitute")
# → "Help Michael Torres (m.torres@email.net, SSN 847-23-1956)"

# Restore after the LLM responds
original = ps.restore(llm_response, result.session_id)
# → Real names/emails back in the response
```

AWS Comprehend handles contextual PII (names, addresses); regex handles structured PII (SSNs, credit cards, JWTs).
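The regex half of that hybrid approach can be sketched with plain regular expressions. The patterns below are illustrative assumptions, not raipii's actual detection rules:

```python
import re

# Illustrative patterns for structured PII; real rules would be more thorough
# (Luhn checks for cards, JWT structure validation, etc.).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def detect_structured_pii(text):
    """Return (label, match) pairs for every structured-PII hit in text."""
    hits = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((label, m.group()))
    return hits
```

Running this over the example prompt flags both the SSN and the email, leaving contextual entities like names to the ML-based detector.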
Replace PII with realistic Faker-generated values. LLMs reason naturally over fake data.
Enable HIPAA mode to skip Comprehend entirely. Regex-only detection, no PHI leaves your region.
Conversation sessions keep consistent substitutions across all turns. Same entity → same fake.
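Session-consistent substitution can be sketched with a per-session mapping table. The `FakeSession` class below is a hypothetical illustration, not raipii's API, and generates fake SSNs with stdlib `random` rather than Faker:

```python
import re
import random

class FakeSession:
    """Hypothetical sketch: map each real value to one stable fake per session."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.real_to_fake = {}
        self.fake_to_real = {}

    def _fake_ssn(self):
        # Generate a plausible-looking fake SSN (format only, not validity-checked)
        return (f"{self.rng.randint(100, 899):03d}-"
                f"{self.rng.randint(10, 99):02d}-"
                f"{self.rng.randint(1000, 9999):04d}")

    def sanitize(self, text):
        def swap(match):
            real = match.group()
            if real not in self.real_to_fake:   # same entity -> same fake
                fake = self._fake_ssn()
                self.real_to_fake[real] = fake
                self.fake_to_real[fake] = real
            return self.real_to_fake[real]
        return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", swap, text)

    def restore(self, text):
        # Invert the mapping on the model's response
        for fake, real in self.fake_to_real.items():
            text = text.replace(fake, real)
        return text
```

Because the mapping lives for the whole session, an SSN mentioned in turn one and turn five maps to the same fake, and `restore` swaps the real value back into whatever the model returns.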
Billed by characters processed. No seat fees.
- Starter: 2M chars/month
- Growth: 100M chars/month
- Business: 1B chars/month