The difference between a frustrating AI interaction and a productive one usually comes down to how you ask. Not because AI is picky, but because clear communication produces clear results.
After writing thousands of prompts that generated real production code, here are the techniques that consistently work.
The Foundation: Context Is Everything
AI models don't know your codebase, your constraints, or your preferences—unless you tell them.
Bad Prompt:
Write a function to validate emails
Better Prompt:
Write a TypeScript function to validate email addresses.
Requirements:
- Return a boolean
- Handle edge cases (empty string, null)
- Use a reasonable regex (doesn't need to be RFC-compliant)
- Include JSDoc comment
This is for a Next.js contact form, client-side validation.
Why it works: The second prompt specifies language, return type, edge cases, documentation expectations, and context. The AI doesn't have to guess.
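A response to the better prompt might look something like this sketch (the function name is illustrative, not from the post):

```typescript
/**
 * Validates an email address for client-side form checks.
 * Deliberately not RFC-compliant; catches common mistakes instead.
 *
 * @param email - The value to validate (may be null or empty).
 * @returns true if the value looks like a valid email address.
 */
function isValidEmail(email: string | null | undefined): boolean {
  if (!email) return false
  // One "@", at least one character on each side, and a dot in the domain.
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/
  return pattern.test(email.trim())
}
```

Notice how every requirement in the prompt maps to a line in the output: the boolean return, the null/empty guard, the pragmatic regex, the JSDoc.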
The Techniques
1. Show, Don't Just Tell
When you want code in a specific style, show an example.
Prompt:
Here's how I structure API route handlers in this project:
```typescript
export async function POST(request: Request) {
  try {
    const body = await request.json()
    // validation
    // business logic
    // return success
  } catch (error) {
    // error handling
  }
}
```
Write a similar handler for updating user preferences.
The body will contain: { theme: 'dark' | 'light', notifications: boolean }
Why it works: The model matches your existing patterns instead of inventing its own.
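A handler following that pattern might come back looking like this; the validation helper and response shapes are a sketch, not the project's actual code:

```typescript
type Preferences = { theme: 'dark' | 'light'; notifications: boolean }

// Narrow an unknown request body to the expected shape.
function isPreferences(body: unknown): body is Preferences {
  if (typeof body !== 'object' || body === null) return false
  const b = body as Record<string, unknown>
  return (
    (b.theme === 'dark' || b.theme === 'light') &&
    typeof b.notifications === 'boolean'
  )
}

async function POST(request: Request) {
  try {
    const body = await request.json()
    // validation
    if (!isPreferences(body)) {
      return Response.json({ error: 'Invalid preferences' }, { status: 400 })
    }
    // business logic (persistence omitted in this sketch)
    // return success
    return Response.json({ ok: true, preferences: body })
  } catch (error) {
    // error handling
    return Response.json({ error: 'Malformed request body' }, { status: 400 })
  }
}
```

The try/catch skeleton, the early validation, and the shape of the success response all mirror the example in the prompt rather than a style the model invented.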
2. Specify the Output Format
Tell the AI exactly what you want back.
Prompt:
Analyze this React component for performance issues.
Output format:
1. Issue: [description]
Severity: [high/medium/low]
Fix: [code snippet]
2. Issue: ...
If no issues found, say "No significant performance issues detected."
Why it works: Structured output is easier to scan and use. You don't have to extract the relevant parts from prose.
3. Provide Constraints
AI defaults to "complete" solutions. Sometimes you want minimal.
Prompt:
Add error handling to this function.
Constraints:
- Don't change the function signature
- Don't add new dependencies
- Keep changes minimal—just wrap the dangerous parts
- Use the project's existing error types from @/lib/errors
Why it works: Without constraints, AI might refactor your entire function. Constraints keep changes focused.
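Here is roughly what a constrained change looks like: only the dangerous call gets wrapped, and the signature stays put. `ParseError` below is a stand-in for the project's own error types (the prompt's `@/lib/errors`):

```typescript
// Stand-in for the project's error types (would be imported from @/lib/errors).
class ParseError extends Error {}

// Before: parseSettings(raw) threw raw JSON errors on malformed input.
// After: same signature, same behavior on success; only the dangerous
// part is wrapped, per the constraints.
function parseSettings(raw: string): Record<string, unknown> {
  let parsed: unknown
  try {
    parsed = JSON.parse(raw) // the dangerous part
  } catch {
    throw new ParseError(`Invalid settings JSON: ${raw.slice(0, 40)}`)
  }
  if (typeof parsed !== 'object' || parsed === null) {
    throw new ParseError('Settings must be a JSON object')
  }
  return parsed as Record<string, unknown>
}
```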
4. Think Step-by-Step (Chain of Thought)
For complex problems, ask for reasoning first.
Prompt:
I need to implement real-time updates for a dashboard.
Before writing code:
1. List 3 possible approaches (WebSocket, SSE, polling)
2. Compare tradeoffs for my use case (Next.js, Vercel hosting, ~100 concurrent users)
3. Recommend one approach with justification
Then provide implementation code for the recommended approach.
Why it works: The reasoning step often catches issues before code is written. You can course-correct after step 2 if the analysis is wrong.
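If the model recommended SSE for this stack, the implementation step might produce something like the sketch below. The `formatSSE` helper and the single hard-coded event are illustrative; a real handler would push on actual data changes:

```typescript
// Format one frame per the server-sent events wire format.
function formatSSE(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`
}

// Sketch of a Next.js App Router GET handler streaming dashboard updates.
async function GET() {
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    start(controller) {
      // A real handler would subscribe to a data source here.
      controller.enqueue(encoder.encode(formatSSE('update', { users: 100 })))
      controller.close()
    },
  })
  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    },
  })
}
```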
5. Iterate, Don't Start Over
When the first response isn't right, build on it.
First prompt: "Write a date picker component"
Response: [some code]
Follow-up: "Good start. Now:
- Add keyboard navigation (arrow keys)
- Support disabled dates via a prop
- Match the styling from globals.css using CSS classes"
Why it works: Each iteration adds specificity. The model keeps context from previous turns, so you don't lose progress.
6. Ask for Alternatives
When you're not sure about an approach, ask for options.
Prompt:
I need to implement caching for API responses.
Give me 3 approaches:
1. In-memory (simple, no dependencies)
2. Redis (if I need persistence)
3. CDN/Edge caching (if possible with my stack)
For each: code example, pros, cons, when to use it.
Why it works: You get a landscape of options instead of one potentially wrong answer.
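For reference, option 1 usually comes back as something like this minimal sketch: a Map with per-entry TTL, evicting lazily on read. Simple, zero dependencies, but scoped to one process and wiped on every deploy:

```typescript
type Entry<T> = { value: T; expiresAt: number }

// In-memory cache with a fixed time-to-live per entry.
class TTLCache<T> {
  private store = new Map<string, Entry<T>>()

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key) // expired: evict lazily on read
      return undefined
    }
    return entry.value
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}
```

When to use it: single-instance deployments where occasionally recomputing a response after a restart is acceptable. If you need persistence or sharing across instances, that's where the Redis option earns its dependency.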
7. Rubber Duck with AI
Use AI as a sounding board for decisions.
Prompt:
I'm deciding between two approaches for user authentication:
Option A: JWT stored in httpOnly cookie
Option B: Session ID in cookie with Redis store
My context:
- Next.js 14 with App Router
- Deployed on Vercel
- Need to support "remember me" functionality
- ~1000 DAU expected
What questions should I be asking myself? What am I not considering?
Why it works: AI is great at surfacing considerations you might miss. It's like having a senior developer ask clarifying questions.
Anti-Patterns to Avoid
1. The Vague Request
❌ "Make this better"
✅ "Improve this function's performance by reducing allocations"
2. The Kitchen Sink
❌ "Write a full authentication system with login, signup, password reset, 2FA, social auth, and admin panel"
✅ Break it into focused requests, one at a time
3. The Assumption
❌ "You know my codebase, so..."
✅ Provide the context every time. AI doesn't remember previous sessions.
4. The Copy-Paste Without Reading
Always read generated code before using it. AI can produce plausible-looking code that's subtly wrong.
My Default Prompt Template
For most code generation tasks, I start with this structure:
[Task]: Brief description of what I need
[Context]:
- Language/framework:
- Where this fits in the codebase:
- Related code/patterns to match:
[Requirements]:
- Specific behavior:
- Edge cases to handle:
- Constraints:
[Output format]:
- Just the code, or code with explanation?
- Any specific formatting?
[Example] (if helpful):
- Input/output examples
- Similar existing code to match
You don't need all sections for simple tasks, but for anything complex, this structure prevents back-and-forth.
Real Examples from This Website
Example 1: Generating a Component
Prompt:
Create a cookie consent banner component for Next.js 14.
Context:
- Using App Router with 'use client' for client components
- Styling with CSS custom properties (--color-bg, --color-text, etc.)
- Need to store consent in localStorage
- Show different defaults for GDPR vs non-GDPR regions
Requirements:
- Floating card at bottom of screen
- Checkboxes for: Essential (always on), Analytics, Marketing
- "Accept All" and "Save Preferences" buttons
- Don't block page interaction
Match the component style from EmailSignup.tsx (I'll paste it below).
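The storage and region-default logic from that prompt is the testable core of the component. A sketch might look like this, with the store injected so it works outside the browser; the key name and function names are illustrative, not the site's actual code:

```typescript
type Consent = { essential: true; analytics: boolean; marketing: boolean }

// Minimal localStorage-like interface so the logic is testable anywhere.
interface KVStore {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
}

const CONSENT_KEY = 'cookie-consent' // assumed key name

// GDPR regions default to opt-out; elsewhere defaults to opt-in.
function defaultConsent(isGdprRegion: boolean): Consent {
  return { essential: true, analytics: !isGdprRegion, marketing: !isGdprRegion }
}

function loadConsent(store: KVStore, isGdprRegion: boolean): Consent {
  const raw = store.getItem(CONSENT_KEY)
  if (!raw) return defaultConsent(isGdprRegion)
  try {
    // Essential is forced on regardless of what was stored.
    return { ...defaultConsent(isGdprRegion), ...JSON.parse(raw), essential: true }
  } catch {
    return defaultConsent(isGdprRegion) // corrupt value: fall back to defaults
  }
}

function saveConsent(store: KVStore, consent: Consent): void {
  store.setItem(CONSENT_KEY, JSON.stringify(consent))
}
```

In the component, `window.localStorage` satisfies the `KVStore` interface directly, and the banner only renders when no stored consent exists yet.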
Example 2: Debugging
Prompt:
This useEffect runs twice on page load, causing duplicate API calls:
[paste code]
Environment: Next.js 14, React 18, development mode
I understand StrictMode causes double-renders, but this also happens in production.
What's causing this and how do I fix it?
Example 3: Refactoring
Prompt:
Refactor this function to be more readable. Keep the behavior identical.
Constraints:
- No new dependencies
- Keep the same function signature
- Split into helper functions if it improves clarity
- Add comments only where logic isn't obvious
[paste function]
The Skill That Transfers
Prompt engineering isn't about memorizing tricks—it's about clear communication.
The same skills that make you good at prompting AI make you good at:
- Writing technical specifications
- Explaining requirements to teammates
- Documenting your code
- Asking good questions on Stack Overflow
As AI tools evolve, the specific techniques might change, but the underlying skill of precise communication only becomes more valuable.
Next Steps
- Want to see these techniques in action? Read How I Built This Website with Claude Code
- Need tool recommendations? Check My AI Tool Stack in 2025
- New to AI-assisted development? Start with the overview: AI-Assisted Development: A Solo Founder's Toolkit
This post is part of the AI & Automation series. Prompting is a skill—and like any skill, it improves with practice.