Every session with Claude Code starts fresh, with no memory of last week’s project, no recollection of the patterns that worked, no awareness of the constraints you’ve established in other contexts. That’s just how it works: you bring the context, it brings the capability. If you don’t bring enough context, you spend the first part of every session re-establishing ground you’ve already covered.

That was the problem I kept running into. I’d find myself saying things like “follow the same approach we used in the other project” or “you know, I prefer not to do it this way”, except Claude didn’t know, because I hadn’t told it. So I’d explain again, and it would adjust, and we’d move on. And then the same thing would happen in the next session.

The fix I landed on borrows something from agile development: the retrospective. Not a formal ceremony, just a short conversation at the end of a significant session, asking a simple question: what did we learn here, and can we make it stick?

I have a few specific mental triggers that prompt the retro.

  1. When I catch myself saying “follow the same pattern we used on project X.” That phrase is a signal that the pattern is valuable enough to reuse, which means it probably shouldn’t live only in my memory. The follow-up I ask is: would it make sense to turn this into a skill or a global instruction? If the pattern applies beyond a single project, it belongs somewhere Claude can find it without me re-explaining it from scratch every time.
  2. When something goes wrong or was painful to implement. At the end of a session where something caused real friction, I run a brief post-mortem. Not “what went wrong” in the abstract, but concrete questions like “why did you get it wrong?” or “why did it take you so long to reply?” The goal is to identify the underlying cause, and then ask “is there a rule or instruction that would have prevented it?” Sometimes the answer is a missing constraint that should go into a CLAUDE.md. Sometimes it’s an unstated assumption I should have surfaced earlier. Either way, the outcome of the conversation is a concrete addition or amendment to the relevant instructions, so we don’t run into the same trouble again.
  3. When something goes right. This one is easier to skip, because when a session goes smoothly the natural impulse is to close the tab and move on. But a smooth session often went smoothly for a reason, and that reason is worth capturing. If Claude handled something elegantly, or if a particular approach turned out to be exactly right, I ask: is there a lesson here we could abstract? A practice worth keeping as a standing instruction?
  4. When a project-specific script turns out to solve a general problem. I’ve built more than a few things that started as project-specific solutions and later turned out to address a need I have everywhere: triggering OS notifications when something is pending, downloading email attachments into an Obsidian vault, that kind of thing. The question is always: does this belong to this project, or should we abstract it, put it somewhere globally accessible, and make it available next time a similar need comes up?

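To make item 2 concrete: the output of a post-mortem is often just a couple of lines appended to the project’s CLAUDE.md. The rule below is a hypothetical illustration of the shape such an addition takes, not one of my actual instructions:

```markdown
## Database migrations
- Never edit a migration file that has already been applied; create a new one instead.
- Run the test suite after generating a migration, before committing it.
```

The test of a good rule is the one stated above: it has to be concrete and actionable, not “be more careful.”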
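Item 4 is where small utilities graduate out of a single repo. As a sketch of the kind of script I mean, here is a cross-platform notification helper; the function name and backend-detection logic are my own illustration, not code from my actual setup:

```python
import shutil
import subprocess


def notify(title: str, message: str) -> str:
    """Send a desktop notification, falling back to stdout.

    Returns the name of the backend that was used, which makes the
    behavior easy to inspect on headless machines.
    """
    if shutil.which("osascript"):  # macOS
        script = f'display notification "{message}" with title "{title}"'
        subprocess.run(["osascript", "-e", script], check=False)
        return "osascript"
    if shutil.which("notify-send"):  # Linux with libnotify
        subprocess.run(["notify-send", title, message], check=False)
        return "notify-send"
    # Headless or unsupported environment: print instead of notifying.
    print(f"{title}: {message}")
    return "stdout"
```

Dropped somewhere on the PATH, or wrapped as a skill, a helper like this serves every project instead of being rewritten inside each one.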
Over time, these conversations accumulate into something useful. The global instructions get richer and the skill library grows. The next time I start a session on familiar territory, Claude doesn’t have to discover my preferences from scratch; it already has them. The sessions run faster, the solutions are more consistent with the rest of my setup, and I’m not burning tokens to re-explain constraints I’ve already established a dozen times.

The more important point, though, is that this isn’t a checklist. The specific questions shift depending on what happened in the session. What stays constant is the rhythm: a significant session ends, and we look back before moving on. Five minutes of reflection at the end saves a lot of repeated setup later.

You can even encode the habit itself. I have a global instruction that triggers at the end of every Pull Request, nudging Claude to ask:

```markdown
## Prompt a rule review after closing a PR

When a PR is merged or all review threads are resolved, ask:

> "Before we close this out, did any of these fixes reveal a pattern
> worth turning into a rule?"

Only add a rule if the answer is concrete and actionable. Skip if the
lesson is just "be more careful" or already covered by an existing rule.
```

That way, even if I forget to initiate the retro, Claude does it for me. A good habit is easier to keep when the system helps enforce it.

If you work with Claude Code on anything beyond one-off queries, it’s worth trying. At the end of your next significant session, just ask: “Is there anything from this session worth turning into a rule or a skill?” You might be surprised how often the answer is yes. And if your AI agent sometimes doesn’t work as well as you’d hope or burns more tokens than expected, resist the urge to vent online. That’s just another session worth a retro: identify what went wrong and add a guardrail so it doesn’t happen again.
