Security & Cost Control
10.1: The Security Mindset
AI generates code that works but not necessarily code that's safe. Security is the one area where you absolutely cannot trust AI blindly.
High-risk areas where you must manually review every line:
- Authentication and authorization -- who can access what
- Cryptography -- hashing, encryption, token generation
- Secrets management -- API keys, database credentials
- SQL and database queries -- injection vulnerabilities
- File system access -- path traversal, upload validation
- User input handling -- XSS, injection, validation
AI optimizes for functionality, not security. It will happily write code that works for legitimate users but is trivially exploitable by attackers. Security review is always your job.
10.2: The Security Checklist
| Action | Why It Matters |
|---|---|
| Never trust user input | Validate and sanitize everything server-side |
| Use parameterized queries | Prevents SQL injection -- never concatenate user input into queries |
| Hash passwords with bcrypt/argon2 | Never store plaintext passwords |
| Use HTTPS everywhere | Encrypts data in transit |
| Set HttpOnly + Secure on cookies | Prevents XSS from stealing session tokens |
| Rate limit auth endpoints | Prevents brute force attacks |
| Never commit secrets | Check git diff --staged before every commit |
| Validate file uploads | Check type, size, and sanitize filenames |
| Use CSRF protection | Prevents cross-site request forgery |
| Log security events | Know when attacks happen |
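To make two of those rows concrete, here's a minimal sketch of password hashing plus a parameterized query, assuming Node with the pg and bcrypt packages (the users table and function names are illustrative, not from the original):

const { Pool } = require('pg');
const bcrypt = require('bcrypt');
const pool = new Pool(); // connection settings come from environment variables

async function createUser(email, password) {
  // Hash with bcrypt -- never store the plaintext password
  const hash = await bcrypt.hash(password, 12); // 12 salt rounds
  // Parameterized query -- $1/$2 placeholders, never string concatenation
  await pool.query(
    'INSERT INTO users (email, password_hash) VALUES ($1, $2)',
    [email, hash]
  );
}

async function verifyLogin(email, password) {
  const { rows } = await pool.query(
    'SELECT password_hash FROM users WHERE email = $1',
    [email]
  );
  if (rows.length === 0) return false;
  return bcrypt.compare(password, rows[0].password_hash);
}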
10.3: The Two-Pass Review
For any security-sensitive code, review twice:
Pass 1: Does it work?
- Correct logic, proper flow, handles edge cases
Pass 2: Can it be exploited?
- What if the input is malicious?
- What if the user is not who they claim?
- What if the request is forged?
- What if someone intercepts the data?
// AI generated this -- looks fine at first glance
app.get('/api/file', (req, res) => {
const filename = req.query.name;
res.sendFile(`/uploads/${filename}`);
});
// Pass 2 catches: path traversal attack
// GET /api/file?name=../../../etc/passwd <-- reads system files!
// Fixed version:
const path = require('path'); // required import -- AI often omits it
app.get('/api/file', (req, res) => {
  if (!req.query.name) return res.status(400).send('Missing name');
  const filename = path.basename(req.query.name); // strips directory components
  const filepath = path.join('/uploads', filename);
  // belt and suspenders: trailing slash so '..' or '/uploads' itself can't pass
  if (!filepath.startsWith('/uploads/')) return res.status(403).send('Forbidden');
  res.sendFile(filepath);
});
10.4: Defense in Depth
Never rely on a single security layer. Stack protections so that if one fails, others still protect:
Layer 1: Input validation (reject bad data at the door)
Layer 2: Authentication (verify who's asking)
Layer 3: Authorization (verify they're allowed)
Layer 4: Parameterized queries (prevent injection even if validation fails)
Layer 5: Encrypted storage (protect data even if database is breached)
Layer 6: Logging & monitoring (detect attacks even if they succeed)
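Here's how layers 1 through 4 might stack in a single Express route -- a sketch, where requireAuth, pool, and the orders table are illustrative assumptions, not a specific library's API:

// Layer 2: requireAuth middleware verifies identity and sets req.user
app.post('/api/orders/:id/cancel', requireAuth, async (req, res) => {
  // Layer 1: input validation -- reject bad data at the door
  const orderId = Number(req.params.id);
  if (!Number.isInteger(orderId) || orderId <= 0) {
    return res.status(400).send('Invalid order id');
  }
  // Layer 3: authorization -- the user must own this order
  // Layer 4: parameterized query -- safe even if validation had a gap
  const { rows } = await pool.query(
    'UPDATE orders SET status = $1 WHERE id = $2 AND user_id = $3 RETURNING id',
    ['cancelled', orderId, req.user.id]
  );
  if (rows.length === 0) return res.status(403).send('Forbidden');
  res.sendStatus(204);
});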
10.5: Cost Control for Paid Services
When using paid APIs (Firebase, Stripe, OpenAI, AWS), AI-generated code can accidentally cost you thousands. Three rules:
Rule 1: Show the math
// Firebase Firestore: $0.06 per 100K reads
// Max users: 1,000. Max reads per user per day: 50
// Worst case: 1,000 * 50 = 50,000 reads/day ≈ 1.5M reads/month = 15 * $0.06 = $0.90/month
// HARD LIMIT: 100,000 reads/day via security rules
Rule 2: Limit everything
// Never call an API without an explicit maximum
const MAX_RETRIES = 3;
const MAX_ITEMS_PER_PAGE = 100;
const MAX_API_CALLS_PER_MINUTE = 60;
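For instance, a retry wrapper that actually enforces MAX_RETRIES -- a sketch, where fn stands in for whatever paid API call you're wrapping:

async function callWithRetries(fn) {
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === MAX_RETRIES) throw err; // give up -- never retry forever
      await new Promise(r => setTimeout(r, 1000 * attempt)); // linear backoff
    }
  }
}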
Rule 3: Prevent loops
// AI loves generating recursive code that can trigger itself
// Always add circuit breakers
let callCount = 0;
const MAX_CALLS = 10;
function processWebhook(data) {
if (callCount >= MAX_CALLS) {
console.error('[CIRCUIT BREAKER] Max webhook calls reached');
return;
}
callCount++;
// ... process
}
A Firebase function that triggers itself, an OpenAI call in a retry loop without a max, a webhook that re-calls its own endpoint -- these patterns have caused real bills of $5,000+ overnight. Always add explicit limits.
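For the self-triggering Firebase case, the usual guard is to check whether the watched data actually changed before writing back. A sketch assuming the firebase-functions v1 Firestore API (trigger syntax varies by version; the field names are illustrative):

const functions = require('firebase-functions');

exports.onUserUpdate = functions.firestore
  .document('users/{id}')
  .onUpdate((change) => {
    const before = change.before.data();
    const after = change.after.data();
    // Circuit breaker: if the watched field didn't change, this run was
    // triggered by our own write below -- bail out instead of looping forever
    if (!after.displayName || before.displayName === after.displayName) return null;
    return change.after.ref.update({
      nameLower: after.displayName.toLowerCase(),
    });
  });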