In the previous post, we talked about threat modeling for AI apps — identifying what can go wrong before it does. Today, we’re shifting our focus to something even more foundational: data security.
If you're building or deploying AI systems, your model is only as trustworthy as the data it sees — both during training and at inference time. Mess that up, and it doesn’t matter how good your code is. You’re exposed.
Why Data is the Real Attack Surface
We often treat AI models like black boxes, but the truth is: models learn from what we feed them. If someone can influence the input or training data, they can influence the behavior of the system.
Here are some real risks that come up when handling data in AI workflows:
- Training data leaks — PII, credentials, or business secrets ending up inside model weights.
- Data poisoning — Intentionally malicious inputs designed to skew, bias, or break the model.
- Inference-time attacks — Inputs crafted to extract sensitive data, confuse logic, or cause toxic outputs.
- Logging leaks — Sensitive data accidentally stored in logs during debugging or user tracking.
Best Practices for Training Data
Whether we're training from scratch or fine-tuning on custom data, the first line of defense is how we handle that dataset.
Anonymize user data
Always strip or mask PII (names, emails, phone numbers, etc.) if your training dataset includes real user content. Use placeholder tokens where possible.
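As a rough illustration, here's a minimal masking pass built on plain regular expressions. The patterns and placeholder tokens below are assumptions for the sketch; a dedicated tool like Presidio (covered in the tooling section) will catch far more than a couple of regexes.

```python
import re

# Illustrative patterns only -- real PII detection needs a proper tool (e.g. Presidio).
PII_PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace obvious PII with placeholder tokens before the text reaches training."""
    for token, pattern in PII_PATTERNS.items():
        text = pattern.sub(token, text)
    return text

print(mask_pii("Contact john@example.com or +1 (716) 555-0100 for a refund."))
# -> Contact [EMAIL] or [PHONE] for a refund.
```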
Validate & sanitize
Create a pipeline to clean text before training. Filter out:
- Profanity or hate speech
- Irrelevant or adversarial samples
- Extreme token length or malformed JSON
You don’t want garbage going into your model.
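Here's a hedged sketch of what that filter pass might look like. The blocklist, token limit, and metadata check are placeholders; swap in your own wordlists, classifiers, and the tokenizer your model actually uses.

```python
import json

BLOCKLIST = {"badword1", "badword2"}   # stand-in for a real profanity/hate-speech list
MAX_TOKENS = 2048                      # crude whitespace count, not real model tokens

def is_clean(record: dict) -> bool:
    """Return True if a training record passes basic sanity checks."""
    text = record.get("text", "")

    # Drop profanity / hate speech (replace with a proper wordlist or classifier).
    if any(word in text.lower() for word in BLOCKLIST):
        return False

    # Drop extremely long samples that would blow past the context window.
    if len(text.split()) > MAX_TOKENS:
        return False

    # If a record carries JSON metadata as a string, make sure it actually parses.
    meta = record.get("metadata")
    if isinstance(meta, str):
        try:
            json.loads(meta)
        except json.JSONDecodeError:
            return False

    return True

dataset = [{"text": "How do I reset my password?", "metadata": '{"channel": "email"}'}]
clean_dataset = [r for r in dataset if is_clean(r)]
```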
Limit memorization
If you're fine-tuning LLMs, set a lower learning rate and enable techniques like differential privacy, shuffling, or dropout to reduce the chances of memorizing specific sequences.
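For the differential-privacy piece specifically, here's a minimal sketch using Opacus with a toy PyTorch model. The model, data, learning rate, and noise settings are all stand-ins; wiring DP-SGD into real LLM fine-tuning takes more care, since per-sample gradients are memory-hungry.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine  # pip install opacus

# Toy stand-ins -- in practice this is your fine-tuning model and dataset.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # deliberately small LR
data = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
loader = DataLoader(data, batch_size=8)

# Wrap training in DP-SGD so individual samples are harder to memorize;
# noise_multiplier and max_grad_norm here are illustrative values.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,
)

criterion = nn.CrossEntropyLoss()
for inputs, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
```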
Version & audit datasets
Keep track of where your data came from, what changes were made, and who accessed it. Tools like DVC or Weights & Biases artifacts can help here.
Best Practices for Inference-Time Data
Just because the model is trained doesn't mean you're safe. In practice, many real-world vulnerabilities show up during inference, when users interact with your deployed model.
Input filtering
Sanitize user prompts. Avoid directly passing raw input to the model. Strip HTML, dangerous code, or known injection patterns.
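A minimal version of that filter might look like the sketch below. The regex-based tag stripping and the two injection patterns are assumptions for illustration; production filters usually layer maintained blocklists or a classifier on top.

```python
import html
import re

# Illustrative patterns only -- real filters should use maintained lists or a classifier.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_prompt(raw: str) -> str:
    """Strip markup and reject prompts that match known injection patterns."""
    text = re.sub(r"<[^>]+>", "", raw)   # crude HTML/script tag removal
    text = html.unescape(text).strip()
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("Prompt rejected by input filter")
    return text

print(sanitize_prompt("<b>What is your refund policy?</b>"))  # -> What is your refund policy?
```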
Token limits
Impose character or token limits to avoid overloading context windows or hitting memory limits. Truncate long inputs.
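A simple truncation guard, assuming a tiktoken encoding as the tokenizer (swap in whatever your model actually uses), might look like this:

```python
import tiktoken  # pip install tiktoken; use your model's own tokenizer if different

MAX_INPUT_TOKENS = 1024  # illustrative budget -- size it to your context window

def truncate_prompt(text: str, limit: int = MAX_INPUT_TOKENS) -> str:
    """Hard-truncate a prompt to a fixed token budget before it reaches the model."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return enc.decode(tokens[:limit]) if len(tokens) > limit else text
```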
Response monitoring
Use filters to catch and block outputs that:
- Include sensitive or unsafe content
- Echo back private data
- Reference forbidden topics
This is especially important if you're generating summaries, completions, or conversational responses.
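As a rough sketch, an output check could combine a PII regex with a placeholder topic list, as below; in practice you'd likely add a moderation API or classifier on top.

```python
import re

# Illustrative checks -- production systems usually layer a moderation model on top.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
FORBIDDEN_TOPICS = {"internal pricing", "employee records"}  # placeholder topics

def is_safe_response(text: str) -> bool:
    """Block responses that echo private data or touch forbidden topics."""
    if EMAIL_RE.search(text):               # looks like the model is echoing back PII
        return False
    lowered = text.lower()
    return not any(topic in lowered for topic in FORBIDDEN_TOPICS)

reply = "Sure, the customer's email is john@example.com"
if not is_safe_response(reply):
    reply = "Sorry, I can't share that."
```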
Avoid logging full user prompts
If you're logging inputs for analytics or debugging, do not store full text unless it's scrubbed. Consider partial logging or masking.
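One hedged pattern: log a hash of the prompt (so you can still correlate requests) plus a short preview, which in a real system you'd also scrub with the PII mask shown earlier before it ever hits disk.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

def log_prompt(prompt: str) -> None:
    """Log a stable fingerprint and a short preview -- never the full raw prompt."""
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:12]  # enough to correlate requests
    preview = prompt[:60] + ("..." if len(prompt) > 60 else "")
    logger.info("prompt_id=%s preview=%r", digest, preview)
```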
Example: Fine-Tuning with User Support Tickets
Let’s say you’re fine-tuning a model on customer support data to improve auto-reply generation.
Potential risks:
- Names, emails, or private conversations get embedded in weights.
- Toxic or biased language from ticket threads influences output behavior.
Mitigations:
- Pre-process and redact emails (john@example.com → [EMAIL])
- Use data filtering scripts to exclude edge cases or flagged tickets
- Regularly test outputs for unintended memorization using known samples (a quick canary check is sketched below)
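A quick-and-dirty canary check might look like the following; `generate` is a stand-in for whatever inference call your stack exposes, and the canary strings are hypothetical.

```python
# Known strings that were redacted from (or planted in) the training set.
CANARIES = [
    "john@example.com",
    "refund code QX-48213",  # hypothetical sensitive fragment
]

def check_memorization(generate, prompts):
    """Probe the fine-tuned model and flag any canary strings that leak into outputs."""
    leaks = []
    for prompt in prompts:
        output = generate(prompt)
        for canary in CANARIES:
            if canary.lower() in output.lower():
                leaks.append((prompt, canary))
    return leaks

# Example with a dummy generator standing in for the real model:
leaks = check_memorization(lambda p: "We emailed john@example.com", ["Who did you contact?"])
print(leaks)  # -> [('Who did you contact?', 'john@example.com')]
```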
Tooling Suggestions
Some open-source tools that can help:
- Presidio (Microsoft) – for PII detection and redaction
- Cleanlab – for detecting label errors or outliers
- TextAttack / OpenPrompt – for simulating and testing poisoned inputs
- Datasette – for exploring and sharing datasets with permissioning
If you're using LangChain, LlamaIndex, or RAG pipelines, consider building custom data guards into your retriever or chunking logic.
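As one framework-agnostic sketch (no LangChain or LlamaIndex API assumed), a data guard can simply wrap whatever retriever you call and scrub or drop chunks before they reach the prompt; the `retrieve` callable, the CONFIDENTIAL flag, and the chunk limit here are all assumptions.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def guarded_retrieve(retrieve, query: str, max_chunks: int = 4) -> list[str]:
    """Call the underlying retriever, then redact PII and drop flagged chunks."""
    guarded = []
    for chunk in retrieve(query)[:max_chunks]:
        if "CONFIDENTIAL" in chunk:          # placeholder policy: skip flagged documents
            continue
        guarded.append(EMAIL_RE.sub("[EMAIL]", chunk))
    return guarded

# Example with a dummy retriever:
docs = guarded_retrieve(lambda q: ["Invoice sent to john@example.com", "CONFIDENTIAL: salaries"], "billing")
print(docs)  # -> ['Invoice sent to [EMAIL]']
```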
Final Thoughts
Good AI starts with good data hygiene.
No matter how advanced your model is, if it learns from bad, toxic, or sensitive data — you’re building a liability, not a product.
In the next post, we’ll dive into model-level attacks and defenses — how people break AI systems after deployment, and what you can do to prevent it.
Until then, treat your training and inference data like you would treat passwords: clean, guarded, and never blindly trusted.
Connect & Share
I’m Faham — currently diving deep into AI and security while pursuing my Master’s at the University at Buffalo. Through this series, I’m sharing what I learn as I build real-world AI apps.
If you find this helpful, or have any questions, let’s connect on LinkedIn and X (formerly Twitter).
This is blog post #3 of the Security in AI series. Let's build AI that's not just smart, but safe and secure.
See you guys in the next blog.