
Lesson 12: Multi-File Operations

Day 12 of 50 · ~7 min read · Phase 2: Core Workflows


The Opening Question

You're standing at a crossroads. You want to rename a function that exists in 12 different files. You want to add a new feature that touches the database schema, the API endpoints, the frontend components, and the tests.

You could do each file one by one, manually tracking dependencies and making sure everything stays in sync. It would take hours.

Here's the question: Can Claude handle a task where the work cascades across many files? Can it keep the whole picture in mind and avoid breaking something it edited earlier?

The answer matters because real development rarely happens in a vacuum. Changes are interconnected.


Discovery

Let's understand how Claude handles complexity at scale.

Question 1: What happens when one edit breaks something else?

Imagine Claude is refactoring a React component. It restructures the component tree, changes how props flow through, updates the export.

Somewhere else in the codebase, another file imports that component and expects the old prop structure. Now it's broken.

Claude could:

  1. Edit both files at once, knowing the dependency
  2. Edit one, discover the problem later, then go back and fix the other
  3. Edit the first and completely miss that the second file exists

Which one do you think Claude does?

The answer depends on how the task is framed and how Claude discovers the connection.

If you say "refactor the LoginForm component," Claude might miss that Header.js imports LoginForm. But if you ask "refactor LoginForm wherever it's used across the project," Claude will:

  1. Search for all references to LoginForm using Grep
  2. Read the files that import it
  3. Edit LoginForm and all the calling code in one coordinated effort
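The discovery step above can be sketched in plain Python. This is a rough, hypothetical stand-in for what a Grep-style search does, assuming a JavaScript project where references live in `.js` files; Claude's actual Grep tool is more capable, but the idea is the same: see every reference before touching anything.

```python
import re
from pathlib import Path

def find_references(root: str, symbol: str) -> dict[str, list[int]]:
    """Return {file: [line numbers]} for every file under root that
    mentions the symbol -- the 'see everything first' step."""
    pattern = re.compile(rf"\b{re.escape(symbol)}\b")
    hits: dict[str, list[int]] = {}
    for path in Path(root).rglob("*.js"):  # assumption: a JS project
        lines = path.read_text(encoding="utf-8").splitlines()
        matches = [i + 1 for i, line in enumerate(lines) if pattern.search(line)]
        if matches:
            hits[str(path)] = matches
    return hits
```

With the full map of references in hand, you (or Claude) can plan the edit order instead of discovering broken importers one failure at a time.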

Pause and think: Why is searching for all uses before editing better than editing one file and hoping for the best?

The answer is visibility. When Claude sees all 12 files that import LoginForm, it can reason about whether the change breaks any of them. It can plan edits in the right order. It can catch mistakes before they happen instead of discovering them later.

This is where the Glob and Grep tools from Lesson 5 become crucial. They're not just for reading code — they're for discovering the full scope of a change before you make it.

Question 2: How does Claude decide what order to edit files?

Here's a question that reveals deep understanding: should you edit the definition first or the calling code first?

If you're renaming a function, should you:

  • Option A: Rename the function definition first, then update all the places it's called?
  • Option B: Update all the calling code first, then rename the definition?

Think about this. If you do Option A and only partially update the callers, you've broken things. If you do Option B and run tests before the function is renamed, the tests fail.

Better question: should you do them in separate steps at all?

The answer Claude usually reaches: no, coordinate them all at once.

Claude will:

  1. Search for all references to the function
  2. Plan the edits
  3. Make all the edits in a single coordinated set of changes
  4. Run tests once to verify everything works

This is called an atomic change — either it all succeeds or you reject it all.
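As a minimal sketch, here is what "atomic" means in code: compute every new file version first, and only write once all of them have been produced successfully. This uses a naive textual replace for illustration; real refactoring tools (and Claude) are syntax-aware.

```python
from pathlib import Path

def atomic_rename(files: list[str], old: str, new: str) -> None:
    """Rename a symbol across many files as one unit: read and
    transform everything up front, then write all or nothing."""
    paths = [Path(f) for f in files]
    originals = {p: p.read_text(encoding="utf-8") for p in paths}
    # Compute every replacement before any file is touched, so a
    # failure here leaves the project exactly as it was.
    updated = {p: text.replace(old, new) for p, text in originals.items()}
    for p, text in updated.items():
        p.write_text(text, encoding="utf-8")
```

The design point is the two-phase shape: no file is written until every edit exists, so a read failure or bad pattern aborts the whole change rather than leaving the project half-renamed.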

Pause and think: How does being able to make coordinated edits across many files change what problems become solvable?

Question 3: What about merge conflicts and cascading failures?

Let's get real about what can go wrong.

You ask Claude to "add logging to every database call." Claude:

  1. Searches for all database calls
  2. Starts editing files
  3. First file works fine
  4. Second file has an unexpected pattern Claude doesn't handle
  5. Third file is in a state where the edit fails
  6. Fourth file is a test file that now fails

What should Claude do?

This is where the agentic loop becomes critical. Claude doesn't barrel forward blindly.

Here's what actually happens:

  1. Claude makes the first few edits
  2. Claude runs the tests or your build
  3. Tests fail (or pass)
  4. Claude sees the error
  5. Claude reasons: "The error in file X is because I didn't handle pattern Y. Let me search for all occurrences of pattern Y and fix them."
  6. Claude makes the additional edits
  7. Claude runs tests again
  8. Repeat until success
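The loop above is simple enough to write down. In this hedged sketch, `run_tests` and `apply_fix` are hypothetical stand-ins: the first is your test command, the second is Claude proposing another round of edits based on the error output.

```python
def refine_until_green(run_tests, apply_fix, max_rounds: int = 5) -> bool:
    """The agentic loop in miniature: act, check, learn, adjust.
    run_tests() returns (passed, error_text); apply_fix(error_text)
    stands in for Claude reading the failure and editing more files."""
    for _ in range(max_rounds):
        passed, error = run_tests()
        if passed:
            return True       # everything works; stop iterating
        apply_fix(error)      # adjust based on what actually failed
    return False              # still red after max_rounds; escalate to a human
```

The `max_rounds` cap matters: it is the point where the loop stops and hands control back to you, which is exactly the "stay in the loop" behavior described above.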

This is iterative refinement across files. Claude isn't perfect on the first pass, but the loop lets it catch its own mistakes and adjust.

The key is that you see this happening. You watch Claude run tests, see an error, read it, and propose a fix. You stay in the loop. You can say "yes, that fix makes sense" or "no, that's not the right direction."

Question 4: How do you verify that multi-file changes work?

When Claude has edited 15 files across your project, how do you know everything is correct?

You could read all 15 diffs. That's tedious and error-prone.

Instead, you lean on automated verification:

  1. Run your test suite. If tests pass, the changes probably work. If they fail, you have a specific error to point Claude to.
  2. Run your linter or type checker. If there are syntax errors or type mismatches, you'll see them immediately.
  3. Try the feature. If it's a user-facing change, actually use it.

Claude usually does steps 1 and 2 automatically. It'll run your tests and show you the results. But step 3 (actually trying the feature) is on you.

The pattern is: trust the tests, then verify manually.
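The automated half of that pattern can be scripted. This sketch runs a list of check commands and collects whatever failed; the commands shown in the usage note are placeholders — substitute your project's actual test, lint, and type-check invocations.

```python
import subprocess

def verify(commands: list[list[str]]) -> list[str]:
    """Run each automated check and return a description of every
    failure (empty list means all checks passed)."""
    failures = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            detail = result.stderr.strip() or result.stdout.strip()
            failures.append(f"{' '.join(cmd)}: {detail}")
    return failures
```

For a JavaScript project that might be called as `verify([["npm", "test"], ["npx", "eslint", "."]])`; a non-empty return value is exactly the error text you'd paste back to Claude.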

Pause and think: Why would automated tests catch some multi-file errors but not others?

Because tests verify behavior, not aesthetics. Tests will catch "this endpoint returns the wrong data" but not "this code is inefficient" or "this naming doesn't match our convention." That's where your human review comes in.


The Insight

Here's what multi-file operations really demand:

Multi-file changes are only safe when Claude can see all the dependencies upfront, edit them in coordination (not sequentially), run tests to catch errors, and iterate when something breaks. The agentic loop is essential — Claude acts, checks, learns from the result, and adjusts. You stay in the loop by reviewing the plan, watching for errors, and guiding Claude when it gets stuck.

The mental model: Imagine a refactoring task as a puzzle with many interlocking pieces. You could move each piece individually and hope they fit together. Or you could see the whole picture first, then move all the pieces at once in a coordinated way, test the fit, and adjust any pieces that don't align. That's multi-file operations done right.


Try It

You're going to experience the full complexity of multi-file changes.

  1. Pick a small refactor in your project:

    • Rename a commonly used function or class
    • Extract a helper function that's duplicated in 3+ places
    • Move a file and update all imports
    • Add a parameter to a function used in multiple places
  2. Start a conversation: claude

  3. Ask Claude to handle it:

    • "Find all places where the sendEmail function is called and add an optional retryCount parameter. Update the function definition and all callers."
    • Or: "Find everywhere we're parsing dates manually and extract it into a single parseDate utility function."
  4. Watch Claude work. Notice:

    • Does Claude search for all references first, or edit blindly?
    • Does it make edits in a coordinated way?
    • Does it run tests to verify?
    • If tests fail, does it read the error and adjust?
  5. When Claude proposes changes:

    • Ask: "Show me all the files you're going to change" or "What's your plan?" to make sure it found everything
    • After accepting changes, run your tests yourself to verify
    • If tests fail, paste the error and watch Claude diagnose
  6. Reflect: Did Claude catch all the places? Did it get the order right? Did automated tests save you from manual review?

This experience will show you that multi-file operations are powerful but require verification.


Key Concepts Introduced

  • Atomic change: a coordinated set of edits across multiple files that either all succeed or are rejected together
  • Dependency discovery: using Grep and Glob to find all places that depend on code you're changing before editing it
  • Cascading edits: changes that trigger a chain of necessary edits in other files
  • Iterative refinement: making a set of edits, testing, discovering problems, and adjusting until everything works
  • Cross-file consistency: ensuring that changes in one file don't break expectations in other files

Bridge to Lesson 13

You've now tackled multi-file changes, coordinated edits, and iterative refinement. But there's another dimension to coordination that's essential for real projects: version control.

When you've made changes across 15 files, you need a way to track what changed, why it changed, and be able to roll back if something goes wrong.

Next lesson's question: Can Claude be your pair-programming partner for git workflows?

We'll explore how Claude can stage, commit, create branches, write good commit messages, create pull requests, and handle the coordination that git enables.

