The Code Review Prompt Pack: 15 Prompts That Help You Read Code Like You Wrote It
Heemang Parmar
You don't need to write all the code. You need to be able to read all of it.
Read time: 14 minutes. Use time: every time you touch a codebase you didn't build alone.
Why This Exists
Most early-stage founders and solo builders fall into the same trap. They ship fast, delegate to contractors or AI tools, and assume that if it works, it's fine. Then six months later, they're debugging a system they don't understand, onboarding a new dev who asks questions they can't answer, or watching a critical feature break because nobody knew that one function had three silent dependencies.
Code review isn't a gatekeeping ceremony for senior engineers at big companies. It's how you stay the person who understands your own product. Even if you're not the one writing every line, you are responsible for every line. That distinction matters more than most founders realize until it's too late.
The teams that stay fast past the first few months aren't necessarily the ones with the cleanest code. They're the ones where the founder or lead can pick up any file, read it in three minutes, and know whether something is wrong. This prompt pack closes that gap. You bring the code. These prompts bring the questions a senior engineer would ask.
How to Use This Pack
- Paste the code block, then the prompt. Don't describe the code in the prompt. Paste the actual code. The model works from what's there, not what you remember about it.
- Use these in sequence for new codebases. Start with understanding (prompts 1-4), move to structure (5-9), then risk (10-13), then debt (14-15). That order mirrors how a good engineer actually reads a codebase.
- Use them selectively for PRs and contractor work. If you're reviewing a specific pull request or a contractor's submission, jump to the prompts that match what you're unsure about.
- Save the outputs. The best use of these isn't a one-time pass. Paste the AI's analysis into a doc next to the file. Future-you will be grateful. So will your first hire.
The Prompts
Section 1: Understanding What You're Looking At
These four prompts are for when you open a file and don't fully know what it does. That happens more than founders admit.
Prompt 1: Plain-English File Summary
Read this code and explain what it does in plain English.
Assume I understand the product context but not the implementation details.
Tell me: what problem is this code solving, what are its inputs and outputs,
and what would break in the product if this file disappeared?
[paste code]
When to use it: Before any review. You need a baseline understanding before you can judge quality.
What to look for in the response: If the explanation is vague or circular ("it handles data processing"), the code probably has a naming or structure problem. Good code explains itself when described honestly.
Prompt 2: Dependency Map
Look at this code and list every external dependency it relies on:
libraries, environment variables, database tables, API endpoints, other files or functions.
For each one, tell me whether this code would fail silently or loudly if that dependency
were unavailable or changed.
[paste code]
When to use it: After shipping a contractor's work or after an AI tool generated a full module.
What to look for in the response: Silent failures are the dangerous ones. If a dependency goes missing and your code returns an empty array instead of throwing an error, you won't know until a user notices something missing.
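To make the distinction concrete, here's a minimal TypeScript sketch of the same dependency failing silently versus loudly. The function names and endpoint are hypothetical:

// Silent: a down or renamed API turns into an empty array the UI happily renders.
async function getPlansSilently(): Promise<string[]> {
  try {
    const res = await fetch("https://api.example.com/plans"); // hypothetical endpoint
    const body = await res.json();
    return body.plans ?? [];
  } catch {
    return []; // "no plans" and "the API is down" are now indistinguishable
  }
}

// Loud: the same failure surfaces immediately, with enough context to act on.
async function getPlansLoudly(): Promise<string[]> {
  const res = await fetch("https://api.example.com/plans");
  if (!res.ok) throw new Error(`plans API returned ${res.status}`);
  const body = await res.json();
  return body.plans;
}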
Prompt 3: Entry and Exit Points
Identify every entry point into this code (where it gets called)
and every exit point (what it returns or mutates).
For each exit point, describe the exact shape of the output and any conditions
that would change that shape. Flag any exit point that returns different types
depending on conditions.
[paste code]
When to use it: Functions that get called from many places, or any code that feeds data into your UI.
What to look for in the response: Functions that return null in some conditions and an object in others are a common source of frontend bugs. This prompt surfaces that before it surfaces in production.
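Here's a minimal sketch of the pattern, with invented names, plus one way to make the shape consistent so callers can't forget the failure case:

// Risky: the return shape depends on a condition.
function findUser(id: string): { id: string; name: string } | null {
  return id === "42" ? { id, name: "Ada" } : null; // callers must remember the null case
}

// Safer: one consistent shape that forces callers to handle both outcomes.
type Result<T> = { ok: true; value: T } | { ok: false; reason: string };

function findUserSafe(id: string): Result<{ id: string; name: string }> {
  return id === "42"
    ? { ok: true, value: { id, name: "Ada" } }
    : { ok: false, reason: "user not found" };
}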
Prompt 4: What the Author Assumed
Read this code and list every assumption the author made about the environment,
the data, or the caller. For each assumption, tell me what would happen
if that assumption were wrong.
[paste code]
When to use it: Reviewing code written by someone else, or code you wrote more than two weeks ago.
What to look for in the response: Assumptions aren't bugs by default. Undocumented assumptions are. If the model finds five assumptions and none of them are in a comment or a test, that's a problem.
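One cheap remedy is promoting an implicit assumption to an explicit check that fails loudly. A hypothetical sketch:

// Implicit assumption: every order has at least one line item.
// Made explicit, a wrong assumption fails loudly instead of silently returning 0.
function orderTotal(order: { items: { price: number }[] }): number {
  if (order.items.length === 0) {
    throw new Error("orderTotal assumes at least one line item");
  }
  return order.items.reduce((sum, item) => sum + item.price, 0);
}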
Section 2: Structure and Design
These prompts move from "what does it do" to "how well is it built." You don't need to know what good architecture looks like to use them. The model will tell you.
Prompt 5: Responsibility Check
Does this code do more than one thing? List every distinct responsibility
this file or function has. Then tell me which responsibilities belong together
and which should be separated. Be specific about what you would split out and why.
[paste code]
When to use it: Any file over 200 lines. Any function that takes more than four parameters. Any function with the word "and" in its name.
What to look for in the response: If the model lists four or more distinct responsibilities, that file will be painful to modify without breaking something. That's not a style preference. That's a maintenance cost you'll pay repeatedly.
Prompt 6: Naming Audit
Review the variable names, function names, and file names in this code.
For each one that is ambiguous, generic, or misleading, tell me:
what does the name suggest, what does it actually do,
and what would be a more honest name?
[paste code]
When to use it: Whenever you find yourself needing to trace through logic to understand what a variable holds. Good names make that tracing unnecessary.
What to look for in the response: Pay particular attention to boolean and catch-all names. flag, temp, data, and a bare isValid with no hint of what's valid are almost always signs of rushed code. This matters more in a small codebase, where there's no institutional knowledge to fill the gaps.
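Here's a before-and-after sketch of what an honest rename looks like; the domain and names are invented:

type Invoice = { subtotal: number };

// Before: the names force you to read the body to know what's happening.
function check(data: Invoice, flag: boolean): boolean {
  return flag ? true : data.subtotal > 0;
}

// After: the same logic, but the names carry the meaning.
function isInvoiceBillable(invoice: Invoice, skipAmountCheck: boolean): boolean {
  return skipAmountCheck ? true : invoice.subtotal > 0;
}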
Prompt 7: Error Handling Audit
Identify every place in this code where something could go wrong:
network calls, parsing, database queries, external API calls, user input.
For each one, describe how the error is currently handled.
Flag any that swallow errors silently, return undefined, or fail without logging.
[paste code]
When to use it: Before releasing any feature that touches external data or user input.
What to look for in the response: Empty catch blocks and bare try/catch without logging are the most common issues. If your error handling doesn't tell you what broke and where, you'll be debugging blind when something goes wrong in production.
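For contrast, here's a sketch of the failure mode next to the fix. saveProfile is a stand-in for any persistence call:

declare function saveProfile(user: { id: string }): Promise<void>; // hypothetical

// Swallowed: the write fails, nothing is logged, and the user sees stale data.
async function updateProfileSilently(user: { id: string }) {
  try {
    await saveProfile(user);
  } catch {}
}

// Surfaced: you know what broke, where, and for whom.
async function updateProfile(user: { id: string }) {
  try {
    await saveProfile(user);
  } catch (err) {
    console.error("saveProfile failed", { userId: user.id, err });
    throw err; // or return an explicit failure the caller must handle
  }
}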
Prompt 8: Hardcoded Values Scan
Find every hardcoded value in this code: strings, numbers, URLs, credentials,
configuration values, timeouts, limits. For each one, tell me whether it should
be an environment variable, a constant, a config value, or a database-driven value.
Explain why.
[paste code]
When to use it: Before your first production deployment, and again whenever a contractor submits a PR.
What to look for in the response: Hardcoded API keys are the obvious red flag, but hardcoded business logic values are the sneaky ones. A timeout of 30 seconds hardcoded in three places means changing it requires finding all three places. You won't find all three.
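A sketch of the fix in a Node runtime, using a named constant with an optional environment override (the variable name is invented):

// Before: 30 seconds, repeated wherever a request is made.
const signalA = AbortSignal.timeout(30_000);

// After: one constant, one place to change it, with an env override for ops.
const REQUEST_TIMEOUT_MS = Number(process.env.REQUEST_TIMEOUT_MS ?? 30_000);
const signalB = AbortSignal.timeout(REQUEST_TIMEOUT_MS);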
Prompt 9: Duplication Check
Look at this code and identify any logic that appears more than once,
either within this file or that seems like it might exist elsewhere in a typical codebase.
For each duplicated pattern, describe what it's doing and suggest a single place
it could live instead.
[paste code]
When to use it: When reviewing code that was written quickly or by multiple people over time.
What to look for in the response: Duplication is a debt multiplier. Every bug you fix in one copy, you'll forget to fix in the other. The model won't catch duplication across files it can't see, but it will catch it within what you paste.
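A small TypeScript illustration of why duplication multiplies debt; the cents-to-dollars logic is a made-up example:

const cartCents = 1999;
const invoiceCents = 4500;

// Duplicated: fix a rounding bug here and you'll forget the second copy.
const cartLabel = `$${(cartCents / 100).toFixed(2)}`;
const invoiceLabel = `$${(invoiceCents / 100).toFixed(2)}`;

// Extracted: one helper, one place to change currency or rounding later.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}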
Section 3: Risk and Security
You don't need a security background to use these prompts. You need to know the right questions to ask.
Prompt 10: Trust Boundary Audit
In this code, identify every place where data crosses a trust boundary:
user input entering the system, data coming from an external API,
values read from a database, or parameters passed from a client.
For each one, tell me whether the data is validated before use
and what an attacker could do if it's not.
[paste code]
When to use it: Any code that handles forms, API routes, or data fetching.
What to look for in the response: The model will often surface SQL injection risks, unvalidated query params, and missing auth checks. These aren't exotic attack vectors. They're the most common ones for early-stage products.
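The classic case, sketched with a node-postgres-style client where db is a hypothetical pool:

declare const db: { query: (sql: string, params?: unknown[]) => Promise<unknown> }; // hypothetical client

async function findByEmail(email: string) {
  // Vulnerable: user input is spliced directly into the SQL text.
  await db.query(`SELECT * FROM users WHERE email = '${email}'`);

  // Safer: the input travels as a bound parameter, never as SQL.
  await db.query("SELECT * FROM users WHERE email = $1", [email]);
}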
Prompt 11: Sensitive Data Exposure Check
Scan this code for any place where sensitive data might be exposed:
logged to the console or a logging service, returned in an API response,
stored in a client-accessible location, or included in an error message.
List each instance and rate the severity.
[paste code]
When to use it: Before shipping any auth-related code, payment flows, or user data APIs.
What to look for in the response: Console.log statements with user objects are extremely common in fast-shipped code. They don't feel dangerous in development. They become dangerous when your logs are forwarded to a third-party service and someone queries them.
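The fix is usually a one-line change: log identifiers, not objects. A sketch with an invented user shape:

const user = { id: "u_123", email: "ada@example.com", sessionToken: "secret" };

// Risky: the whole object, token included, lands in whatever ingests your logs.
console.log("login succeeded", user);

// Safer: log only the identifier debugging actually needs.
console.log("login succeeded", { userId: user.id });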
Prompt 12: Race Condition and State Risk
Does this code involve any async operations, shared state, or sequential steps
that depend on each other completing in order? If so, identify every scenario
where those operations could complete out of order or fail partway through.
Describe the user-visible consequence of each scenario.
[paste code]
When to use it: Any code with async/await, concurrent operations, or multi-step writes.
What to look for in the response: The most dangerous class here is partial writes: step one succeeds, step two fails, and your data is now in a half-updated state with no cleanup logic. That scenario is almost never covered in happy-path testing.
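Here's a sketch of a partial write and one compensating-action fix. chargeCard, createOrder, and refundCharge are hypothetical; when both steps live in one database, a transaction is the stronger tool:

declare function chargeCard(amountCents: number): Promise<string>; // hypothetical, returns a charge id
declare function createOrder(chargeId: string): Promise<void>;     // hypothetical
declare function refundCharge(chargeId: string): Promise<void>;    // hypothetical

// Fragile: if createOrder throws, the customer is charged for nothing.
async function checkoutFragile(amountCents: number) {
  const chargeId = await chargeCard(amountCents);
  await createOrder(chargeId);
}

// Sturdier: the partial failure is detected and compensated for.
async function checkout(amountCents: number) {
  const chargeId = await chargeCard(amountCents);
  try {
    await createOrder(chargeId);
  } catch (err) {
    await refundCharge(chargeId); // undo step one before surfacing the error
    throw err;
  }
}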
Prompt 13: What Would You Need to Know to Change This Safely?
Imagine you are a new engineer joining a team and this is the first time
you are seeing this code. What would you need to know before you could
safely make a change to it without breaking something?
List every implicit contract, every undocumented constraint,
and every hidden side effect you found.
[paste code]
When to use it: Before handing any module to a new developer, a contractor, or an AI coding tool.
What to look for in the response: If the list is long, the code is fragile. Not necessarily buggy today, but one confident change away from breaking. This prompt makes the hidden fragility visible.
Section 4: Technical Debt and Longevity
Technical debt isn't always a problem to fix now. It's always a problem to know about.
Prompt 14: Debt Triage
Review this code and identify technical debt: shortcuts taken,
patterns that won't scale, missing abstractions, workarounds,
or things that work now but will cause pain when the system grows.
For each item, categorize it as: fix now, fix before scaling, or acceptable trade-off.
Explain each categorization.
[paste code]
When to use it: End-of-sprint reviews, before major feature additions, or any time the code "works but feels wrong."
What to look for in the response: The distinction between "fix before scaling" and "acceptable trade-off" is where judgment lives. A shortcut in a feature nobody uses is different from a shortcut in your auth flow. Use the model's output as a starting point, then apply your own product context.
Prompt 15: The Six-Month Question
Assume this codebase is 10x larger in six months.
What breaks first? What becomes the hardest to change?
What decisions made in this code will constrain future decisions the most?
Be specific about which lines or patterns create those constraints.
[paste code]
When to use it: Quarterly architecture reviews, or any time you're about to make a pattern choice that will be copied across the codebase.
What to look for in the response: The patterns that get copied are the dangerous ones. A singleton pattern, a direct database call inside a component, or an auth check copy-pasted into twelve routes: these start as one-off decisions and end up as architectural constraints. This prompt surfaces those before they replicate.
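A sketch of the replication problem; db and both functions are hypothetical:

declare const db: { query: (sql: string, params?: unknown[]) => Promise<unknown> }; // hypothetical

// Replicates badly: raw SQL inside a UI-layer function. Copy this into twelve
// screens and the users table can never change safely again.
async function renderProfileCard(userId: string) {
  return db.query("SELECT * FROM users WHERE id = $1", [userId]);
}

// Replicates well: screens call one named data-access function, so a schema
// change touches one file instead of twelve.
async function getUserById(userId: string) {
  return db.query("SELECT * FROM users WHERE id = $1", [userId]);
}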
Common Pitfalls
Pasting only the "interesting" part of the code. Context is how these prompts work. A function that looks clean in isolation might be dangerous given what calls it. Paste generously, including surrounding functions and imports.
Treating the model's output as a final verdict. The model doesn't know your product constraints, your team's skill level, or your runway. It gives you the questions. You make the judgment calls.
Skipping this step because the code "passes tests." Tests check that code does what the author expected. Code review checks whether what the author expected was the right thing. Those are different questions.
Reviewing code once and never again. Code ages. A file that was clean six months ago may have four contributors and three workarounds in it now. Build a habit, not a one-time pass.
Using these prompts as performance theater. Some founders run reviews to feel responsible rather than to learn anything. If you're not reading the output carefully and pushing back when something doesn't make sense, you're not getting the value.
Letting "AI wrote it" lower your review bar. Code generated by Cursor, Copilot, or any AI tool still needs review. AI tools optimize for plausible code, not correct code for your specific context. The same prompts apply.
Not saving what you find. The output of a good review is institutional knowledge. If it lives only in a chat window, it's gone in a month. Paste the findings into a decisions doc, a Notion page, or a comment in the file.
Why We Built This
At ProductOS, the core belief is that knowing what to build is becoming more valuable than knowing how to build it. That's true. But there's a corollary: as more of the "how" gets automated, the founders and builders who stay in control are the ones who can read what's being built and ask the right questions. Code review is one of the highest-leverage places to do that.
This prompt pack reflects how we think about the product development lifecycle. Tools like Cursor, Lovable, Bolt, and v0 are genuinely good at generating code. They start at "how to build." They don't start at "should this be built this way" or "what breaks when this scales" or "who can safely change this six months from now." Those questions have to come from somewhere. This pack gives you somewhere to start.
ProductOS is built to carry product thinking, research, design decisions, and context all the way through to deployed code.