Data Leakage
Unmasked customer data flowing into third-party LLMs creates irreversible security risks and compromises your organisation's privacy posture.
IO Gate detects and masks PII before text reaches ChatGPT, Claude, or any LLM. Get visibility, control, and zero-retention security.
IO Gate acts as a neutral security layer between your team and every major frontier model.
Data breaches, compliance failures, and privacy violations cost organisations millions
Once customer data reaches a third-party LLM unmasked, it cannot be recalled: retention, training, and logging are outside your control, and your organisation's privacy posture is permanently compromised.
Building and maintaining custom masking logic in every service drains engineering resources and slows product velocity. Centralised protection removes that tax.
Manual redaction cannot scale with growing data volumes, leaving teams exposed to accidental disclosure and compliance violations.
A transparent proxy that keeps your AI workflows intact while protecting sensitive data
A request containing sensitive user data is sent from your application to the IO Gate API endpoint.
POST api.iogate.io/v1/chat/completions
IO Gate intercepts the call before it reaches the AI model, logging metadata and initiating the masking pipeline.
{ "entities_detected": 3, "policy": "strict" }
Named entities — names, emails, card numbers, addresses — are replaced with semantically neutral placeholders.
"Sarah" → [PERSON_1]   "[email protected]" → [EMAIL_1]
The sanitised prompt is forwarded to the target LLM. No sensitive data ever leaves your security boundary.
Forwarding to gpt-4o — 0 PII tokens in payload
The AI response passes back through IO Gate, which re-substitutes original tokens before returning the result.
[PERSON_1] → "Sarah"   [EMAIL_1] → "[email protected]"
IO Gate sits as a real-time proxy between your users and any AI tool. Every prompt is analysed, scrubbed, and logged before it leaves your network.
Analyze this support ticket: customer Emma Rodriguez, [email protected], DOB 12/03/1990, disputed charge on card 4111 1111 1111 1111, ref TXN-9920441.
Analyzing payload for PII patterns…
Paste any text to scan for PII — structural patterns + English context NER, running entirely in your browser. No data leaves your device.
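The "structural patterns" half of that scan typically pairs a regex with a checksum, so random digit strings are not flagged as card numbers. A common choice is the Luhn check digit algorithm, sketched here (a standard technique; whether IO Gate uses exactly this is an assumption):

```typescript
// Luhn checksum: filters out 16-digit strings that cannot be card numbers.
function isLikelyCardNumber(candidate: string): boolean {
  const digits = candidate.replace(/[\s-]/g, ""); // strip spaces and dashes
  if (!/^\d{13,19}$/.test(digits)) return false;
  let sum = 0;
  // Walk right to left, doubling every second digit.
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}
```

The demo card above, 4111 1111 1111 1111, passes this check; most random 16-digit strings do not, which keeps false positives down.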
Preserves the full semantic meaning of prompts after masking, so AI responses remain accurate and useful.
Low-latency pipeline adds ~200ms overhead per call — invisible to users, transparent to your infrastructure.
Placeholder tokens maintain data shape (name length, address format) to avoid model confusion.
Swap your existing LLM base URL for the IO Gate endpoint. Zero code changes in your application logic.
No message content is ever stored on IO Gate servers. Only aggregate anonymised metrics are retained.
```typescript
import OpenAI from 'openai';

// Drop-in: IO Gate acts as a transparent proxy
const client = new OpenAI({
  baseURL: 'https://api.iogate.io/v1',
  apiKey: process.env.IOGATE_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: 'Contact John at [email protected]',
  }],
});

// PII is masked before leaving your infra and
// re-injected in the response. Nothing else changes.
```
Request early access and we'll be in touch within 24 hours.
Limited Priority Slots Available