The Debugging Prompt Pack: 18 Prompts That Help You Find What's Actually Broken
Most debugging is just asking the wrong questions in the right order. These prompts ask the right questions first.
📋 Read time: 14 minutes. Use time: every time something breaks.
Why This Exists
Debugging is the part of product development where time disappears. Not because the bugs are hard. Because most people start debugging by staring at the symptom instead of interrogating the system.
The default move is to paste an error message into ChatGPT, get a plausible-sounding fix, apply it, and watch a different error appear. You're not fixing bugs. You're playing whack-a-mole with your codebase.
Teams that debug well don't just have better tools. They have a structured way of questioning the code before they touch it. They form a hypothesis first. They isolate the variable. They verify the fix didn't introduce a new failure. That's a thinking protocol, not a talent. And you can encode that protocol into prompts.
This pack gives you 18 prompts organized by debugging phase. Use them with any LLM (ChatGPT, Claude, Gemini, whatever you have open right now). Each prompt is designed to make the AI reason carefully instead of guess confidently. There's a big difference.
How to Use This
- Match the prompt to the phase you're in. Don't use a "root cause" prompt when you haven't reproduced the bug yet. Phase order matters.
- Fill in the [brackets]. Every prompt has placeholders. The more specific you are, the more useful the output. Vague input produces vague debugging.
- Treat output as a hypothesis, not a verdict. These prompts help you think faster. You still verify. The code is the ground truth.
- Chain prompts when a bug is deep. Start with Triage, move to Root Cause, close with Regression Prevention. Chaining is where the real time savings happen.
The 18 Prompts
Phase 1: Triage (Before You Touch Anything)
The most expensive debugging mistake is jumping to a fix before you understand the failure. These prompts slow you down in the right way.
Prompt 1: The First Principles Error Read
I have this error:
[PASTE FULL ERROR MESSAGE AND STACK TRACE]
Before suggesting any fix, do the following:
1. Explain what this error message literally means in plain English.
2. Identify the specific line or component where the failure originates.
3. List 3-5 distinct possible causes, ordered from most to least likely given this stack trace.
4. Tell me what additional information you'd need to narrow it down further.
Do not suggest a fix yet.
Why it works: Most LLMs jump to solutions. This prompt forces a diagnostic read first. You often find the real cause in step 3 before you even run another test.
Prompt 2: The Reproduction Checklist
I'm trying to reproduce this bug reliably before I fix it:
Bug description: [DESCRIBE WHAT BREAKS AND WHEN]
Environment: [LOCAL / STAGING / PRODUCTION]
Frequency: [ALWAYS / SOMETIMES / RARE]
Last known working state: [WHEN DID THIS LAST WORK, IF EVER]
Generate a step-by-step checklist I can follow to reproduce this bug consistently. Include:
- Exact conditions to set up (data state, user role, environment variables, etc.)
- The precise action sequence that triggers the failure
- What "confirmed reproduced" looks like vs. a different bug
- Edge cases that might give false positives
Why it works: If you can't reproduce it reliably, you can't verify your fix worked. This prompt makes reproduction a structured exercise, not a guessing game.
Prompt 3: The Blast Radius Map
I found a bug in this part of the codebase:
[PASTE RELEVANT CODE OR DESCRIBE THE MODULE/FUNCTION]
Before I fix it, help me understand the blast radius:
1. What other parts of the system likely depend on this code?
2. What behaviors might change as a side effect of modifying this?
3. What adjacent features should I manually test after a fix?
4. Are there any patterns in this code that suggest the same bug might exist elsewhere?
Why it works: Fixing one bug and shipping two is how trust gets eroded. This prompt makes you think about the fix before you write it.
Prompt 4: The Timeline Interrogation
A bug appeared in production. Here's what I know about the timeline:
[DESCRIBE: when it was first reported, what changed recently, any deployments or config changes in the last X days]
Help me reason backward from the bug to likely causes:
1. Based on the timeline, what types of changes could have introduced this?
2. What should I check in the git history or deployment logs?
3. Is this more likely a code change, data change, infrastructure change, or external dependency change?
4. What's the fastest way to test each hypothesis?
Why it works: Most bugs aren't random. Something changed. This prompt treats the timeline as evidence and helps you read it.
Phase 2: Root Cause Analysis
You've reproduced it. Now you need to understand why, not just what.
Prompt 5: The Five Whys Facilitator
I've reproduced this bug:
Surface symptom: [WHAT THE USER/SYSTEM EXPERIENCES]
Immediate technical cause: [WHAT LINE OR CONDITION FAILS]
Run a five-whys analysis on this bug. For each "why," push one level deeper into the system. After five levels, tell me:
1. What the root cause is (not the trigger, the root cause)
2. Whether the fix belongs at the surface level or deeper in the stack
3. Whether this root cause could produce other bugs we haven't found yet
Why it works: Most bugs get patched at level one or two. This prompt forces you to level five, where the fix actually sticks.
Prompt 6: The State Interrogator
This bug seems to be related to unexpected state. Here's the context:
[DESCRIBE: what state is involved, what it should be, what it actually is, and where state is managed]
Help me trace the state problem:
1. Where does this state originate?
2. What are all the places in the code that can mutate it?
3. What conditions could cause it to end up with the unexpected value I'm seeing?
4. Is this a mutation problem, a timing problem, or a scope/reference problem?
5. What's the cleanest place to add a debug log or assertion to catch exactly when the state goes wrong?
Why it works: State bugs are the hardest to find because the failure shows up far from the cause. This prompt builds a map from effect back to origin.
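To make point 5 concrete: one low-effort pattern is routing every mutation of the suspect state through a single setter that asserts its invariants, so the log names the first bad write instead of the distant read that eventually fails. A minimal sketch in TypeScript; `cartTotal` and the call sites are hypothetical names, not from any specific codebase:

```typescript
// Hypothetical sketch: every mutation of the suspect state goes through one
// setter that checks invariants, so the log pinpoints the first bad write
// rather than the faraway place where the bad value finally blows up.
let cartTotal = 0;

function setCartTotal(value: number, source: string): void {
  if (!Number.isFinite(value) || value < 0) {
    // Fires at the mutation site; the stack trace names the culprit.
    console.error(`cartTotal corrupted by ${source}:`, value, new Error().stack);
  }
  cartTotal = value;
}

setCartTotal(19.99, "addItem");          // fine
setCartTotal(19.99 - 25, "applyCoupon"); // logged: negative total, with a stack trace
```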
Prompt 7: The Async / Race Condition Probe
View prompt
I suspect this bug might be a timing or race condition issue. Here's the relevant code:
[PASTE CODE]
And here's the behavior: [DESCRIBE THE INCONSISTENT OR INTERMITTENT FAILURE]
Analyze this for concurrency and timing problems:
1. Are there any operations here that assume a specific execution order that isn't guaranteed?
2. Are there any missing awaits, unhandled promise rejections, or unchecked async flows?
3. Is there a shared resource being accessed without proper guards?
4. What's the simplest test I can write to confirm or rule out a race condition?
5. If this is a race condition, what's the right fix pattern for this specific situation?
Why it works: Race conditions are easy to misdiagnose because they look like data bugs. This prompt gives the LLM a specific lens to apply.
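For a concrete picture of what point 2 is hunting for, here is the classic shape in JavaScript/TypeScript: a missing `await` that lets a read race the write it depends on. A minimal sketch with hypothetical names and a simulated slow write:

```typescript
// Hypothetical sketch of the missing-await race: saveUser() returns a
// Promise, but the caller never awaits it, so the read below can run before
// the write commits. It only fails when the write happens to be slow.
const store = new Map<string, { id: string; createdAt: number }>();

async function saveUser(id: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulated slow DB write
  store.set(id, { id, createdAt: Date.now() });
}

async function register(id: string): Promise<void> {
  saveUser(id); // BUG: missing `await` — the next line races the write
  console.log(store.get(id) ?? "not found (race lost)");
  // Fix: `await saveUser(id);` makes the execution order explicit.
}

register("u1");
```

Note why this looks like a data bug: the symptom is "user not found," which sends you hunting through the database layer when the actual problem is ordering.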
Prompt 8: The API / External Dependency Isolator
View prompt
My bug might involve an external API or third-party dependency. Here's what I know:
External service involved: [NAME AND WHAT IT DOES]
What I'm sending: [DESCRIBE THE REQUEST OR CALL]
What I'm getting back: [DESCRIBE THE RESPONSE OR FAILURE]
What I expected: [EXPECTED BEHAVIOR]
Help me isolate whether this is my code or theirs:
1. How do I confirm the external service is behaving correctly independent of my code?
2. What aspects of my integration (auth, payload shape, headers, error handling) are most likely to be wrong?
3. What should I check in the docs or changelog of this dependency?
4. How do I mock or stub this dependency so I can test my code in isolation?
Why it works: Blaming the API is usually wrong. This prompt forces you to prove where the fault lies before you act on the assumption.
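For point 4, the standard move is to hide the external call behind a small interface and swap in a stub during tests. A minimal sketch, assuming a hypothetical payments API; the interface and names are illustrative, not any real SDK:

```typescript
// Hypothetical sketch: put the third-party call behind an interface so a
// well-behaved stub can answer in tests. If the bug reproduces against the
// stub, it's your code; if not, suspicion shifts to the real service.
interface PaymentsClient {
  charge(amountCents: number): Promise<{ ok: boolean; error?: string }>;
}

// Stub that behaves exactly as the docs promise.
const stubPayments: PaymentsClient = {
  async charge(amountCents) {
    if (amountCents <= 0) return { ok: false, error: "invalid_amount" };
    return { ok: true };
  },
};

async function checkout(client: PaymentsClient, amountCents: number): Promise<string> {
  const result = await client.charge(amountCents);
  return result.ok ? "paid" : `failed: ${result.error}`;
}

// Exercise your integration logic in isolation from the network.
checkout(stubPayments, 0).then(console.log); // "failed: invalid_amount"
```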
Prompt 9: The Data Integrity Detective
View prompt
I think this bug is caused by bad or unexpected data, not bad code. Here's the context:
[DESCRIBE: what data the system receives, what format/value is causing the failure, where the data comes from]
Help me investigate:
1. What validation or sanitization should exist at the entry point for this data?
2. Is this likely a missing edge case in existing validation, or a missing validation layer entirely?
3. What database query or log check would show me the extent of bad data in production?
4. How do I fix the code defensively so this data shape never causes a failure again?
5. Do I also need to backfill or clean the existing bad data, and if so, what's a safe approach?
Why it works: Data bugs feel like code bugs until you look at the data. This prompt separates the two clearly.
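Point 4 usually cashes out as validation at the entry point: reject or normalize the bad shape before it spreads into the system. A minimal sketch with a hypothetical `parseOrder` boundary function:

```typescript
// Hypothetical sketch: validate at the boundary so a bad shape fails loudly
// where the data enters, instead of surfacing as a confusing bug downstream.
interface Order {
  id: string;
  quantity: number;
}

function parseOrder(raw: unknown): Order {
  const obj = raw as Partial<Record<keyof Order, unknown>>;
  if (typeof obj?.id !== "string" || obj.id.length === 0) {
    throw new Error(`parseOrder: missing or invalid id: ${JSON.stringify(raw)}`);
  }
  if (typeof obj.quantity !== "number" || !Number.isInteger(obj.quantity) || obj.quantity < 1) {
    throw new Error(`parseOrder: invalid quantity: ${JSON.stringify(raw)}`);
  }
  return { id: obj.id, quantity: obj.quantity };
}

parseOrder({ id: "A1", quantity: 2 });   // ok
parseOrder({ id: "A1", quantity: "2" }); // throws at the boundary, not in billing
```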
Prompt 10: The Environment Diff
This bug happens in [ENVIRONMENT A] but not in [ENVIRONMENT B]. Here's what I know about each:
Environment A (broken): [DESCRIBE: OS, runtime version, env vars, services, config]
Environment B (working): [DESCRIBE: same fields]
Help me find what's different:
1. What categories of difference (config, dependencies, runtime, data, networking) should I check first?
2. What specific things in the description above look like likely culprits?
3. What diagnostic commands or checks would reveal hidden differences I haven't listed?
4. How do I make the two environments more comparable to isolate the variable?
Why it works: "Works on my machine" is a real bug category. This prompt treats environment difference as a structured debugging problem.
Phase 3: Fix Quality
You know what's wrong. Now make sure the fix is actually right.
Prompt 11: The Fix Review Before You Commit
Here's the bug I found and the fix I'm planning to apply:
Bug: [DESCRIBE THE ROOT CAUSE]
Proposed fix: [DESCRIBE OR PASTE THE CODE CHANGE]
Before I commit this, review the fix for:
1. Does this fix the root cause or just the symptom?
2. Are there edge cases this fix doesn't handle?
3. Does this fix introduce any new failure modes or security issues?
4. Is there a simpler fix that achieves the same result?
5. Is there anything about this fix that would make it hard to maintain or understand in six months?
Why it works: Most bad fixes look fine in the moment. This prompt adds a second opinion before the code is in production.
Prompt 12: The Test Coverage Gap Finder
I just fixed this bug:
[DESCRIBE THE BUG AND THE FIX]
Help me write tests that would have caught this bug before it shipped:
1. What unit test should exist for the function or component I fixed?
2. What integration test would catch this bug at the system level?
3. What edge cases should be in the test suite that clearly weren't there before?
4. Write the test cases in pseudocode or plain language first, then actual test code if useful.
Why it works: Every bug is a test coverage gap. This prompt makes you close it before moving on.
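The output you want from step 4 is a regression test named after the bug, pinning the exact input that used to fail. A minimal sketch, assuming a Jest-style test runner and a hypothetical `formatPrice` fix:

```typescript
// Hypothetical regression test: pin the exact input that used to fail so the
// bug cannot silently return. Assumes a Jest-style runner (describe/test/expect).
function formatPrice(cents: number): string {
  // The fix: this previously crashed on negative values (refunds).
  const sign = cents < 0 ? "-" : "";
  return `${sign}$${(Math.abs(cents) / 100).toFixed(2)}`;
}

describe("formatPrice", () => {
  test("regression: negative amounts (refunds) no longer crash", () => {
    expect(formatPrice(-1999)).toBe("-$19.99");
  });

  test("still handles the ordinary case", () => {
    expect(formatPrice(2500)).toBe("$25.00");
  });
});
```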
Prompt 13: The Revert Risk Assessment
I need to decide whether to ship a fix forward or revert to the last stable state. Here's the situation:
Bug severity: [DESCRIBE IMPACT ON USERS]
Fix complexity: [SIMPLE ONE-LINER / MODERATE REFACTOR / LARGE CHANGE]
Time pressure: [HOW URGENT IS RESOLUTION]
Revert risk: [WHAT WOULD REVERTING UNDO, ANY MIGRATIONS OR DATA CHANGES]
Help me think through this decision:
1. What are the risks of shipping the fix forward vs. reverting?
2. Is there a middle option (feature flag, partial revert, hotfix)?
3. What should I communicate to users or stakeholders while I resolve this?
4. What's the decision framework for this type of situation?
Why it works: Speed pressure makes this decision feel obvious when it isn't. This prompt slows the decision down just enough to make it deliberately.
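The middle option in question 2 can be as small as gating the fix behind a flag, so shipping forward and reverting both become a config flip rather than a deploy. A minimal sketch; the flag plumbing and function names are hypothetical:

```typescript
// Hypothetical sketch of the "middle option": keep the old path intact and
// gate the fix behind a flag, so rollback is a flag flip, not a redeploy.
const flags = { useFixedTotalsCalc: true }; // in reality, read from your flag service

function calculateTotal(items: number[]): number {
  if (flags.useFixedTotalsCalc) {
    // New, fixed path: ignores non-finite junk values.
    return items.filter(Number.isFinite).reduce((sum, n) => sum + n, 0);
  }
  // Old path, preserved as the instant-revert target.
  return items.reduce((sum, n) => sum + n, 0);
}

console.log(calculateTotal([10, 20, NaN])); // 30 with the flag on, NaN with it off
```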
Phase 4: Regression Prevention
Fixing the bug is step one. Making sure it doesn't come back is step two. Most teams skip step two.
Prompt 14: The Post-Mortem Template Generator
I just resolved a production bug. Here are the details:
Bug description: [WHAT BROKE]
Root cause: [WHY IT BROKE]
Time to detect: [HOW LONG FROM INTRODUCTION TO DISCOVERY]
Time to resolve: [HOW LONG FROM DETECTION TO FIX]
Impact: [WHO WAS AFFECTED AND HOW]
Generate a blameless post-mortem document with:
1. Incident summary (2-3 sentences)
2. Timeline of events
3. Root cause analysis
4. Contributing factors
5. What we got right in the response
6. Action items with owners and deadlines (leave [OWNER] as placeholder)
7. Process changes to prevent this class of bug
Why it works: Post-mortems that don't get written don't help anyone. This prompt makes it fast enough that you actually do it.
Prompt 15: The Monitoring Gap Finder
This bug existed in production for [DURATION] before we detected it. Here's how we found out:
[DESCRIBE: user report, alert, internal discovery, etc.]
Help me design better detection for this class of problem:
1. What metrics or signals would have surfaced this bug earlier?
2. What logs should exist that don't currently exist?
3. What alert thresholds make sense for this type of failure?
4. What synthetic test or health check would catch this proactively?
5. What's the most important single monitoring improvement to make first?
Why it works: If a user found your bug before your monitoring did, your monitoring has a gap. This prompt closes it.
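Question 4's synthetic check can be as small as a script that exercises the path that broke, on a timer, and alerts on the result. A minimal sketch; the endpoint URL and the alert hook are assumptions you'd replace with your own:

```typescript
// Hypothetical synthetic check: hit the path that broke, on a schedule, and
// alert when it misbehaves. The URL and alert destination are placeholders.
const CHECK_URL = "https://example.com/api/checkout/health";

async function syntheticCheck(): Promise<void> {
  try {
    const res = await fetch(CHECK_URL);
    if (!res.ok) {
      alertOnCall(`checkout health check failed: HTTP ${res.status}`);
    }
  } catch (err) {
    alertOnCall(`checkout health check unreachable: ${err}`);
  }
}

function alertOnCall(message: string): void {
  // Placeholder: wire this to PagerDuty, Slack, or whatever you use.
  console.error(`[ALERT] ${message}`);
}

setInterval(syntheticCheck, 60_000); // run every minute
```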
Prompt 16: The Defensive Coding Audit
Here's the code around the bug I just fixed:
[PASTE THE SURROUNDING CODE, NOT JUST THE FIX]
Audit this code for defensive coding gaps:
1. What inputs or states does this code not handle gracefully?
2. Where are the failure modes that have no error handling?
3. What assumptions does this code make that aren't validated?
4. What would happen if a dependency (database, API, cache) became unavailable mid-execution?
5. Suggest specific defensive additions, ordered by risk reduction.
Why it works: One bug fixed, five found. That's what a good defensive audit does. This prompt runs it systematically.
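The additions point 5 tends to produce look like this: validate assumptions up front, handle dependency failure explicitly, and fail with context. A minimal sketch with hypothetical names (`getDisplayName`, the `db` shape):

```typescript
// Hypothetical sketch of typical defensive additions: check assumptions
// early, give failures context, and decide explicitly what happens when a
// dependency is down instead of letting the raw error propagate.
async function getDisplayName(
  userId: string,
  db: { find(id: string): Promise<{ name?: string } | null> }
): Promise<string> {
  if (!userId) {
    throw new Error("getDisplayName: userId is required"); // validated assumption
  }
  let user: { name?: string } | null = null;
  try {
    user = await db.find(userId);
  } catch (err) {
    // Dependency failure handled explicitly, with a deliberate fallback.
    console.error(`getDisplayName: db lookup failed for ${userId}`, err);
    return "Unknown user";
  }
  return user?.name ?? "Unknown user"; // missing record or field handled too
}
```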
Phase 5: Specialized Situations
Some bugs don't fit clean categories. These prompts handle the hard ones.
Prompt 17: The Performance Regression Investigator