Debugging Any Problem
7.1: The Three Forces
When you're stuck on any problem -- a bug, a new integration, a performance issue -- three forces working together will get you unstuck: Research, Logs, and Tests.
```
       Research + Docs
             |
      _______/ \_______
     /                 \
  Logs <-----------> Tests
     \_________________/
             |
      Solution Emerges
```
Each force feeds the others. Research tells you what to log. Logs reveal what to test. Test failures tell you what to research next.
7.2: Research First
Before touching code, understand the landscape:
```
Research the NextAuth.js documentation for:
- How session callbacks work
- The correct way to add custom fields to the session
- Common pitfalls when extending the User type
```
Ten minutes of documentation research saves two hours of trial and error.
What research gives you:
- Correct API usage
- Known edge cases
- Proven patterns from the community
7.3: Logs Make the Invisible Visible
Add strategic logging at every decision point in your code:
```js
console.log('[Auth] Session callback triggered', {
  user: user.id,
  token: token.sub,
  timestamp: new Date().toISOString()
});
console.log('[Auth] Token before modification:', JSON.stringify(token, null, 2));
console.log('[Auth] Token after modification:', JSON.stringify(newToken, null, 2));
```
What logs give you:
- The actual execution flow (not what you think happens)
- Real data shapes (not what the docs say)
- The exact failure point
You can't fix what you can't see. When something doesn't work, the first question is always: "What is actually happening?" Logs answer that question.
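The tag-and-timestamp pattern above can be wrapped in a small helper so every log line has a consistent shape. This is a sketch; `debugLog` is a hypothetical name, not part of any framework:

```javascript
// Minimal tagged logger: prefixes each message with a [tag] and an ISO
// timestamp, and pretty-prints any attached data. Hypothetical helper,
// shown only to illustrate consistent decision-point logging.
function debugLog(tag, message, data) {
  const timestamp = new Date().toISOString();
  if (data !== undefined) {
    console.log(`[${tag}] ${timestamp} ${message}`, JSON.stringify(data, null, 2));
  } else {
    console.log(`[${tag}] ${timestamp} ${message}`);
  }
}

// Usage at a decision point:
debugLog('Auth', 'Session callback triggered', { user: 'user_123' });
```

A consistent prefix like `[Auth]` also makes it trivial to grep the output for one subsystem's flow.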
7.4: Tests Lock In Wins
Once you know what the correct behavior should be, write a test:
```js
describe('Session callback', () => {
  it('should add user role to session', async () => {
    const session = await getSession({ user: mockUser });
    expect(session.user.role).toBe('admin');
  });

  it('should handle missing user gracefully', async () => {
    const session = await getSession({ user: null });
    expect(session).toBeNull();
  });
});
```
What tests give you:
- Confidence the fix actually works
- Protection against the same bug returning
- Documentation of expected behavior
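For context, here is a sketch of a session callback that would satisfy tests like the ones above. It assumes a NextAuth-style shape, but `getSession` here is a plain stand-in function for illustration, not the NextAuth export:

```javascript
// Stand-in for the session callback under test. In a real NextAuth config
// this logic would live in callbacks.session; it is written as a plain
// function here so the behavior the tests describe is concrete.
async function getSession({ user }) {
  if (!user) return null; // handle missing user gracefully
  return {
    user: {
      id: user.id,
      role: user.role ?? 'user', // copy the custom field onto the session
    },
  };
}
```

Note how each test maps to one branch of the function: the null guard and the role copy.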
7.5: The Feedback Loop in Action
Here's a real example debugging a payment integration:
1. Research the Stripe API: how to create checkout sessions, required parameters, webhook verification. Learned: you need line_items and success_url.
2. Write the checkout function with strategic logs. The log output reveals: priceId is undefined.
3. Research how price IDs work in Stripe. Learned: they're created in the dashboard, not generated in code.
4. Write tests: one for invalid price IDs (should throw), one for valid items (should create a session). The tests catch the undefined case.
5. Final implementation has validation, proper data flow, correct error handling. Tests pass. Logs confirm the flow works end-to-end.
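The result of those steps might look like the sketch below. The `createCheckout` name is illustrative, and the `stripe` client is injected (e.g. from the stripe-node SDK) so the function stays testable:

```javascript
// Sketch of the checkout function after the research/logs/tests cycle.
// Price IDs come from the Stripe dashboard, so validate before calling
// the API -- this is the check the logs and tests above motivated.
async function createCheckout(stripe, priceId, successUrl) {
  if (!priceId) {
    throw new Error('priceId is required (prices are created in the Stripe dashboard)');
  }
  console.log('[Checkout] Creating session', { priceId, successUrl });
  return stripe.checkout.sessions.create({
    mode: 'payment',
    line_items: [{ price: priceId, quantity: 1 }],
    success_url: successUrl,
  });
}
```

Injecting the client means the invalid-price test can run against a fake `stripe` object without touching the network.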
7.6: The Debugging Prompt
Use this prompt to invoke the full cycle:
```
I need to [implement feature / fix bug].

Before writing code:
1. RESEARCH: Look up the documentation for [relevant APIs]
2. Identify common pitfalls and best practices

Then:
3. LOGS: Add strategic console.logs at decision points
4. TESTS: Write tests that verify the expected behavior

Show me all three parts, then implement the solution.
```
7.7: When to Use This Approach
| Situation | Why It Works |
|---|---|
| New integration | Research prevents common mistakes |
| Mysterious bug | Logs reveal what's actually happening |
| Refactoring | Tests ensure you don't break things |
| Performance issue | Logs + research identify bottlenecks |
| Security concern | Research reveals threat models, tests verify fixes |