Detect and sanitize PII before it reaches your LLM. Replace real data with tokens or realistic fakes. Restore original values after the model responds.
```python
# Before: the raw prompt hits your LLM
prompt = "Help John Smith (john@acme.com, SSN 392-45-7810)"

# Sanitize with raipii
result = ps.sanitize(prompt, mode="fake_substitute")
# → "Help Michael Torres (m.torres@email.net, SSN 847-23-1956)"

# Restore after the LLM responds
original = ps.restore(llm_response, result.session_id)
# → Real names/emails back in the response
```

Detects both structured PII (SSN, credit cards, JWTs) and contextual entities (names, addresses) with high precision.
Replace PII with realistic synthetic values. LLMs reason more naturally over realistic fake data than over opaque placeholder tokens, and produce better output.
Enable HIPAA mode to ensure no PHI is sent to any external service. All detection runs within your region.
Conversation sessions keep consistent substitutions across all turns. Same entity → same fake.
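One way to implement that consistency is a per-session substitution map: the first time an entity appears it is assigned a fake value, and every later turn reuses the same fake. The `Session` class and the fake-name pool below are an assumed design sketch, not raipii's internals.

```python
import itertools

# Placeholder fake-name pool (assumption for illustration only)
FAKE_NAMES = itertools.cycle(["Michael Torres", "Dana Wu", "Priya Shah"])

class Session:
    """Remembers real → fake substitutions for one conversation."""

    def __init__(self):
        self.real_to_fake = {}

    def substitute(self, entity):
        # Same entity → same fake, across every turn of the session
        if entity not in self.real_to_fake:
            self.real_to_fake[entity] = next(FAKE_NAMES)
        return self.real_to_fake[entity]

s = Session()
first = s.substitute("John Smith")   # turn 1
second = s.substitute("John Smith")  # turn 2 reuses the same fake
assert first == second
```

Keeping the map server-side, keyed by `session_id`, is what lets `restore` later swap the fakes back for the real values.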
Billed by characters processed. No seat fees.
- Starter: 2M chars/month
- Growth: 100M chars/month
- Business: 1B+ chars/month