mental model

context, instruction, constraint, output — the four-layer framework

by chaz johnson · method with ai

every strong AI prompt has four components, whether you planned them or not. the difference between one that works and one that doesn't is usually a missing layer.
once you see them, you can't unsee them.

why prompts fail

most prompts fail for one of two reasons. either they're too vague — Claude doesn't have enough information to give you something specific — or they're too narrow — Claude has the instruction but none of the surrounding context that would make the instruction meaningful.

the four-layer framework solves both. it gives you a mental checklist for what a complete prompt contains, and a way to diagnose what's missing when something comes back wrong.


the four layers

layer 1 — context
who are you, what's the situation, what matters here?
context is everything Claude needs to understand the world this request lives in. your role, the project, the audience, the stakes, any background that would change how a thoughtful person would approach this. context isn't throat-clearing — it's the foundation the rest of the prompt builds on. without it, Claude is guessing about who it's helping and why.
layer 2 — instruction
what do you actually want Claude to do?
the task itself. clear, specific, and stated as an action rather than a question. "write a follow-up email" is an instruction. "what should i say in a follow-up email?" is a question, and it invites a response that tells you what to do rather than doing it. most prompts have this layer. the problem is usually that it's the only layer.
layer 3 — constraint
what are the limits, requirements, and things to avoid?
constraints are the guardrails that separate a useful output from a generic one. length, tone, format, things that must be included, things that must not be included. constraints are what tell Claude what kind of email rather than just "write an email." this is the most commonly skipped layer — and skipping it is why outputs so often need multiple rounds of corrections.
layer 4 — output
what does a useful response actually look like?
this layer describes the shape of success: not just what you want Claude to do, but what you want to receive. "a single paragraph i could paste directly into an email" is an output spec. "three options with trade-offs listed under each" is an output spec. "a bullet list of no more than five items" is an output spec. the clearer this layer is, the less time you spend reformatting or asking Claude to restructure what it gave you.

what it looks like assembled

here's how the four layers come together in a single prompt. each layer is short — the framework isn't about length, it's about completeness.

example prompt — all four layers assembled
layer 1 — context
i'm a freelance web designer. i just finished a spec website for a local barbershop in my area — they don't have a site yet. i sent them the preview link three days ago and haven't heard back. the owner seemed genuinely interested when i reached out initially.
layer 2 — instruction
write a follow-up message i can send via facebook messenger.
layer 3 — constraint
keep it short — two or three sentences at most. don't sound like a salesperson. don't use the phrase "just checking in." don't mention pricing.
layer 4 — output
give me the message ready to copy and paste. no subject line, no sign-off — just the message body.
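if you build prompts in code, the same assembly is just string handling, nothing more. here's a minimal sketch; the function name and the four-argument shape are my own illustration, not part of the framework:

```python
# hypothetical sketch: joining the four layers into one prompt string.
# plain python, no AI library involved -- the framework is about content, not tooling.

def build_prompt(context: str, instruction: str, constraint: str, output: str) -> str:
    """join the four layers, in order, into a single prompt."""
    layers = [context, instruction, constraint, output]
    return "\n\n".join(layer.strip() for layer in layers)

prompt = build_prompt(
    context="i'm a freelance web designer. i just finished a spec website for a local barbershop.",
    instruction="write a follow-up message i can send via facebook messenger.",
    constraint="keep it short. don't sound like a salesperson. don't mention pricing.",
    output="give me the message ready to copy and paste. no subject line, no sign-off.",
)
```

the point of writing it this way is that a missing layer becomes a missing argument: the code won't even run without all four, which is exactly the discipline the framework asks for.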

a prompt with all four layers doesn't leave Claude guessing about who you are, what you want, what the limits are, or what form the answer should take. the result lands closer to usable on the first try.


using it as a diagnostic

the framework is as useful for fixing bad prompts as it is for building good ones. when a response comes back wrong, run through the four layers and ask which one is missing or weak.

generic, surface-level response? usually missing context. Claude defaulted to the most average interpretation of your request because it didn't know enough about your situation.

response that doesn't do the right thing? usually a weak instruction layer — the task wasn't clearly stated as a specific action.

right idea, wrong format or wrong tone? almost always a missing constraint layer. Claude did what you asked, but without the guardrails that would have shaped it toward something actually useful.

output that needs heavy reformatting? missing output spec. you didn't tell Claude what you wanted to receive, so it gave you what seemed reasonable — which isn't always the shape you needed.
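the four diagnoses above amount to a symptom-to-layer lookup. sketched as code (the symptom strings and function name are my own shorthand, hypothetical only):

```python
# hypothetical sketch: the diagnostic from this section as a simple lookup.
# symptom -> the layer most likely missing or weak.

DIAGNOSTIC = {
    "generic, surface-level response": "context",
    "does the wrong thing": "instruction",
    "right idea, wrong format or tone": "constraint",
    "needs heavy reformatting": "output",
}

def diagnose(symptom: str) -> str:
    """map a bad-output symptom to the layer to fix first."""
    return DIAGNOSTIC.get(symptom, "unknown; re-check all four layers")
```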

"the first output is diagnostic information. if it's wrong, look at which layer was missing — don't just ask again."


you don't always need all four

quick, low-stakes tasks don't require a fully layered prompt. if you're asking Claude to summarize a document you've pasted, or translate a sentence, or explain a concept — a short, direct request is fine. the framework is for situations where the stakes are higher and the output needs to be specific.

use it consciously for a week or two on the prompts that matter most — the ones where you're spending the most time on corrections. the pattern becomes natural fast.
after a while you stop thinking about layers and just write complete prompts by default.

that's when the shift becomes permanent.
