Over the past few months, I’ve been building AI agents. I take an iterative approach to most of what I do: start simple, watch what happens, adjust. Early on, the biggest improvement came from anchoring agents with dense, precise language — a few words about funding model or team size that carry enough implicit meaning to reshape the output. But as I kept iterating, two more things emerged that made a consistent difference: separating context from constraints, and the order I gave them in.

The instructions I give agents fall into two distinct types. The first is narrative — what kind of company we are, what we’re optimizing for, how we think about tradeoffs. It describes the environment the agent is working within. The second is directive — do this, don’t do that, format output this way, here’s what the input looks like. Commands, not descriptions.

From what I’ve seen, neither works well alone. Say I need an appointment scheduler and I only give constraints: use .NET, 30-minute slots, no double-booking, send email confirmations. The agent builds exactly what was asked. But anything not specified, it guesses blind — does it need a cancellation policy? A waitlist? Timezone handling? It has no idea who’s using this or why, so every gap is a coin flip.

My instinct was to keep adding constraints until every gap was covered. It worked — but the instructions bloated, and I was still only covering gaps I thought of.

Narrative alone has the opposite problem. Tell the agent “I run a car dealership, I need an appointment scheduler” and it can infer a lot — service bays, test drives, different appointment types with different durations, maybe a follow-up workflow. But nothing bounds it. Does it build a full CRM? Add inventory integration? The agent has good judgment and no fence around it.

Combine them — “I run a car dealership, I need an appointment scheduler. Use .NET, 30-minute slots, no double-booking, send email confirmations” — and the output changes. The agent makes reasonable calls on things not specified because it knows who it’s building for, and that context means fewer explicit constraints are needed in the first place. Meanwhile the constraints keep it from wandering. Judgment and boundaries, working together.

I start with context — who the company is, what it optimizes for, the domain, the team — and follow with constraints. Each directive lands in a frame that gives it meaning. “Use 30-minute slots” reads differently after “I run a car dealership” than it does in isolation.
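That ordering can be sketched as a tiny prompt assembler — a minimal illustration, not a fixed API; the function name and output format here are my own invention:

```python
# Sketch: assemble an agent prompt with narrative context first,
# directive constraints second. Names and format are illustrative.

def build_prompt(context: str, constraints: list[str]) -> str:
    """Return a prompt where context frames the constraints that follow."""
    lines = [context.strip(), "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    "I run a car dealership. I need an appointment scheduler.",
    [
        "Use .NET",
        "30-minute slots",
        "no double-booking",
        "send email confirmations",
    ],
)
print(prompt)
```

Swapping the argument order is trivial in code; the point is that the template commits you to one ordering, so the context is always the frame the constraints land in.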

Reverse the order and the constraints still get followed — but the agent has no frame for the decisions those constraints didn’t cover. It follows the rules and misses the point.

Both layers — context and constraints — benefit from dense language. But in my experience, narrative is where it has the most leverage. There are many ways to describe a company’s situation, and a well-chosen phrase can do the work of a paragraph. Constraints leave less room — “use .NET” is already as precise as it gets. The narrative layer is where compression pays off most, and where loose language wastes the most context window.