The Split-Brain Workflow

Why I Hired Claude Opus as My Architect and Gemini 3 as My Consultant

For a long time, I tried to be "AI monogamous". I wanted one subscription, one chat window and one answer. I spent months trying to force Google Gemini to be a perfect in-line code architect, and just as long trying to get Claude to digest massive libraries of documentation without choking. It was only when I stopped comparing them and started treating them as two distinct employees with radically different job descriptions that my productivity skyrocketed.

The "Rabbit Hole" of the Copy-Paste Era

My journey didn't start with this sophisticated workflow. Like most people, I started with ChatGPT in a browser tab. At first, it felt like magic - I would copy a block of code, paste it into the chat, and get an immediate fix. But as the novelty wore off, I started to notice a recurring and dangerous pattern that I came to call "The Rabbit Hole".

The workflow was clumsy - constantly Alt-Tabbing, copying file contents and stripping out sensitive data - but the real issue was cognitive. I found that the AI tools I was using often suffered from a form of tunnel vision. If I pasted an error message and a snippet of code, it would immediately lock onto a specific theory about what was wrong. If that theory proved to be incorrect, it was nearly impossible to talk it out of it.

I would spend hours in a loop: I would apply the fix, it would fail, I would paste the new error, and the AI would say, "Apologies, I made a mistake," before returning a slightly tweaked version of the exact same broken logic.

It was sycophantic; it wanted to please me so badly that it stopped thinking critically.

We would spiral down this rabbit hole together, obsessing over a missing semicolon or a variable type, only for me to realise two hours later that the problem wasn't in the code I pasted at all - it was a configuration issue in a file I hadn't shared.

That frustration was the catalyst that forced me to look for a better way.

The Split-Brain Solution

I realised that for serious development, relying on a single AI context - especially one disconnected from my environment - was limiting my problem-solving. Today, I have a very specific split.

Claude Opus lives directly inside my Visual Studio and VS Code environments (via GitHub Copilot), acting as my pair programmer. Meanwhile, Google Gemini 3 stays open in a separate browser window as my external consultant. They never talk to each other directly, and that separation is exactly what makes this workflow powerful.

Claude Opus: The Colleague In The Room

When I am deep in the code, Claude Opus is the colleague "in the room". Because it has access to my open files and repository context, I treat Opus like a Senior Developer sitting at the desk next to me. I don't have to explain variable names, folder structures, or project-specific nuances; it just "sees" them.

I use it for the deep, surgical work - refactoring a specific class, writing unit tests for the function I just highlighted, or explaining a complex legacy method I’ve stumbled upon.

It provides a sense of grounding: Opus doesn't hallucinate non-existent libraries as often because it can see my package.json or .csproj files right there in the context, and it writes code that matches my existing style because it is literally looking at it.

Google Gemini 3: The "Clean Slate" Consultant

However, deep integration has a downside: the aforementioned tunnel vision. Because Opus is so focused on the files I have open, it sometimes loses the forest for the trees.

This is where Google Gemini 3 enters the workflow.

I treat Gemini 3 not as a coder, but as a "Clean Slate" Solution Architect. When I hit a wall - where Opus is just moving bugs around rather than fixing them - I switch to the browser. I paste the error message or describe the architectural problem without dumping my entire codebase into it. This forces me to articulate the problem clearly, and it forces Gemini to think from first principles without being biased by my existing (potentially bad) code.

Its reasoning capabilities and massive context window allow it to suggest completely different approaches that I hadn't considered because I was staring at the same function for three hours.

I can upload a PDF of new documentation, a screenshot of the UI, or even just ask high-level questions such as "I'm trying to implement this pattern in C#, but it feels clunky. Is there a modern alternative?" It acts as a sanity check against the code Opus and I are writing.

The Magic Button Trap

There is, however, a critical danger in this workflow that I’ve learnt to constantly guard against.

It is the temptation to become a "Clipboard Developer" - it is incredibly easy to fall into a rhythm where I paste a prompt, copy the output, and run it without truly parsing what the code is doing.

I’ve done it, and paid the price with obscure bugs that took twice as long to fix because I didn’t write the logic myself.

I have to remind myself that these AIs are tools to help me code, not to replace me coding. Even Claude Opus, with its senior-level reasoning, can confidently suggest a method that doesn't exist. Gemini 3 can misunderstand the nuance of a specific business rule. My role has shifted from being a "writer" of syntax to a "reviewer" of logic. I treat every block of AI-generated code exactly like a Pull Request from a colleague: I read it line by line. I question it. If I don't possess the fundamental knowledge to verify the AI's output, I am not pair programming; I am just gambling with my codebase.

The Rules of Engagement

In summary, then, I have developed a coding workflow that assigns each task to the resource, myself included, best suited to that point in the process:

  • Claude Opus (In-IDE): This is the "Hands-on Developer". Use it for syntax, refactoring, writing tests, and tasks that require awareness of the current file structure.
  • Google Gemini 3 (In-Browser): This is the "External Consultant". Use it for high-level architectural questions, "clean slate" debugging, and researching new patterns without the distraction of your current code.
  • The Separation: Keeping them separate prevents "Context Bias". Sometimes you need an AI that doesn't know how messy your code is to tell you how to fix it.
  • The Golden Rule: You are the Lead Engineer. The AI generates the draft, but you own the release. If you can't explain the code, don't commit it.
Written by Pete Oare, Principal Application Development Consultant

