Integrating AI into Your Development Workflow: Part 2 - Workflow Recipes and Context Templates

Author’s Note: I’m a Director of Engineering who’s spent the past year exploring AI coding tools with my team, and personally on nights and weekends. I’m not an AI researcher or a full-time coder these days, but I’ve logged many hours tinkering with these assistants and discussing their pros and cons with the developers working in our codebase. This guide shares what I’ve learned - the good, the bad, and the practical - for busy developers who want to get up to speed with AI in their workflow.

Why Workflows Matter

Even the best agents can underperform if they’re used without structure. The most effective developers I’ve seen don’t just “talk” to AI; they give it context, define scope, and treat it like a collaborator who benefits from clarity. This section aims to help you improve your workflow and develop consistent templates that make your approach repeatable.

If you take away one thing from this post, it should be that you still need to put in effort if you want consistently good results. AI can’t do magic unless you’re a fully engaged partner.

Throughout this post, you’ll see references to prompt templates. These are reusable formats you should develop (and iterate on) to help structure tasks for AI agents. Build your own; the basic examples below will point you in the right direction.


1. Beginner Patterns: Task-Based Prompts

Use this when: You’re doing local changes, test scaffolding, or small diagnostics.

  • Setup: Start with a clear, scoped task.
  • Agent: Grok (currently free) or Haiku (fast/cheap).
  • Template:
    • TASK: [One sentence of what you need]
  • CONTEXT: [Code or file contents; the more effort you put in here, the better your results]
    • CONSTRAINTS: [What not to change or what to preserve]

Example:

TASK: Add a test case for this edge condition in the calculate_tax function.

CONTEXT: [paste or fully describe the function here, including a thorough explanation of the edge case; ideally list the files that could or should be touched with the test]

CONSTRAINTS: Don’t modify the function body, just write the test.
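For illustration, here’s roughly the kind of output you’d hope to get back from a prompt like this. The calculate_tax signature, the tax.py module, and the negative-amount edge case are all invented for this sketch:

```python
# Hypothetical sketch: assumes calculate_tax(amount, rate) lives in tax.py
# and is expected to raise ValueError for negative amounts.
import pytest

from tax import calculate_tax


def test_calculate_tax_rejects_negative_amount():
    # Edge case: a negative amount should fail fast rather than
    # silently producing a negative tax value.
    with pytest.raises(ValueError):
        calculate_tax(amount=-100.00, rate=0.08)
```

Notice that the constraint held: the function body is untouched, and only a test was added.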

Pro tip: Always double-check generated code. Assume it needs at least one tweak.


2. Intermediate Patterns: Plan and Execute (Agent Split)

Use this when: A task touches multiple files or steps.

Step 1: Plan with Sonnet-thinking or GPT-4 in Cursor, or Codex in ChatGPT.

Prompt:

“Don’t write code yet. Help me design a plan to implement X. List the steps.”

(In Cursor’s Planning mode, you can skip the “don’t write code” instruction; it won’t execute by default.)

Step 2: Execute each step with Haiku or Composer.

Prompt:

“Implement Step 2 from the plan above. Only modify [filename], do not touch others.”
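To make the split concrete, here’s a hypothetical round trip; the plan steps, file names, and feature are invented for illustration:

```
PLAN (from the planning model):
1. Add a discount field to the Order model.
2. Update apply_pricing() in pricing.py to subtract the discount.
3. Extend the pricing tests to cover discounted orders.

EXECUTION PROMPT (to the smaller model):
"Implement Step 2 from the plan above. Only modify pricing.py, do not
touch others. Keep the existing apply_pricing() signature."
```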

2b. Use ChatGPT to Shape Contextual Prompts

Once you have a plan, ChatGPT is an excellent partner for refining it into copy-paste-ready prompts for Cursor or Codex. It helps bridge the gap between high-level planning and tactical execution.

Example prompts you can use:

“Here’s the plan. Now turn it into prompts for Cursor using my template.”
“Help me summarize this test suite for the agent.”
“Create a one-shot prompt for refactoring these 3 files.”
“Rewrite this step so I can paste it into Composer with the right context and constraints.”

This handoff stage is where you save the most time, because you’re giving execution agents exactly what they need to succeed on the first try.

Why this works: Large models are better at reasoning and scoping. Smaller models are better at code editing. You reduce cost and error by separating the two, and improve success rate by giving the execution agent the context it needs, cleanly structured.


3. Advanced: Structured AI Collaboration

Use this when: You’re tackling a multi-day feature or want to offload routine work.

  • Create a file like plan.md with a checklist of sub-tasks (see the sketch after this list).
  • Use Cursor in executor mode with the right agent, or Codex in ChatGPT, to go item by item.
  • Use test runs as checkpoints. Let the agent retry on failures, but only once or twice.
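A plan.md for this flow might look something like the following; the feature and sub-tasks are hypothetical:

```
# plan.md - export-to-CSV feature (hypothetical)
- [x] 1. Add ExportJob model and migration
- [ ] 2. Implement the CSV serializer for report rows
- [ ] 3. Add the /exports endpoint and background task
- [ ] 4. Write integration tests for the happy path
- [ ] 5. Handle failure states and retries
```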

Pro tip: When using Codex for multi-file changes, paste in a clear folder map and test context. It helps avoid accidental regressions.
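A folder map doesn’t need to be fancy; a few annotated lines pasted at the top of the prompt usually do the job (the paths here are invented):

```
src/
  exports/          # new code lives here
    jobs.py
    serializers.py
  api/routes.py     # add the /exports endpoint here; change nothing else
tests/
  test_exports.py   # run these after each step
```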


4. Pairing with ChatGPT

When you’re not sure how to approach a task, or how to delegate it to one of the other agents, ChatGPT is often the best first stop. Think of it as a fast, always-available senior peer.

  • Brain dump first. You don’t need a perfect prompt. Just describe what you’re working on, and ChatGPT can help you shape it into a plan or a task list.
  • Template creation. You can ask it to generate or fill out templates that you can feed to Cursor, Codex, or even use by hand.
  • Agent selection. It can help you decide which agent is best for a task. You can say “Here’s what I’m trying to do; what model should I use?” and get a sensible answer. Surprisingly, it doesn’t show a lot of bias here either - although it does have a large ego around how much it can help you with your next steps (which is probably well earned).
  • Debugging assistant. It’s particularly strong at helping you narrow down a bug’s cause or figure out what context might be missing for another agent.

It’s not perfect - context can drift, and it may miss subtle relationships - but it’s still one of the best general-purpose copilots available. For early-stage thinking, template filling, sanity checking, and nudging you in the right direction, it’s hard to beat.


5. Debugging with AI

Common AI debugging uses:

  • Scan logs for patterns
  • Explain test failures
  • Rewrite brittle test cases (see the example below)
  • Identify missing context

Prompt Tip:

“Here’s the failing test and log output. What’s the likely cause? Suggest 1-2 fixes and explain what additional logging might help.”
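As a concrete illustration of the “rewrite brittle test cases” use, here’s the kind of before-and-after an agent can produce. The reports module, its now() helper, and generate_report() are invented for this sketch:

```python
# Hypothetical sketch: assumes a reports module where generate_report()
# stamps report.created_at via a reports.now() helper.
from datetime import datetime, timezone

from reports import generate_report


# Before: brittle, because the assertion depends on when the test runs.
def test_report_timestamp_brittle():
    report = generate_report()
    assert report.created_at.isoformat().startswith("2025-")


# After: pin the clock so the assertion is deterministic.
def test_report_timestamp(monkeypatch):
    fixed = datetime(2025, 1, 1, tzinfo=timezone.utc)
    monkeypatch.setattr("reports.now", lambda: fixed)
    report = generate_report()
    assert report.created_at == fixed
```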

6. Build Prompt Templates You Can Reuse, and Iterate on Them Regularly

Everyone’s use cases and best approaches will differ slightly. Build out your own personal or team prompt library, share it, and revise it regularly as you learn what works and what doesn’t. Agents will change, and your templates should evolve with them.

Some basic ideas to start the library might include:

"Explain this file"

“Walk me through what this file is doing, section by section. Add comments as needed.”

"Generate a test suite"

“Based on this function and its expected inputs, write a set of 3-5 tests that cover edge cases and expected flows.”

"Rewrite for clarity"

“This logic works but is hard to follow. Rewrite it for readability. Keep the interface and tests passing.”

“Review my test suite for value”

“The test suite is becoming long and brittle. Review the unit tests for the User model and report on any tests that are slow, flaky, or not adding real value, and suggest ways to improve them. Build a plan so they can be addressed.”

Wrapping Up

There’s no one-size-fits-all approach to working with AI agents, but building repeatable workflows, shaping clear prompts, and knowing when to pair planning models with execution models can dramatically improve results.

In the next post, I’ll shift focus to the other half of the equation: cost and complexity. How do you stay efficient and cost-effective without losing quality or control?


Part 1: Getting Started with Cursor and Agents is available here.

Part 3: Managing Cost and Complexity is available here.