
Lesson 32: Lab — Test-Driven Feature Development

Day 32 of 50 · ~20 min lab · Phase 4: Integrations


The Mission

Over the past week, you've learned about MCP integrations, IDE integrations, hooks, testing, debugging, extended thinking, and checkpointing. Today, you'll use all of them together.

Your mission: Build a feature using test-driven development, with Claude as your pair programmer.

You'll practice:

  • Writing tests first, then implementation (Lesson 28)
  • Using hooks to auto-validate (Lesson 27)
  • Debugging when something breaks (Lesson 29)
  • Using extended thinking for complex decisions (Lesson 30)
  • Forking to explore alternatives (Lesson 31)
  • Jumping between terminal and IDE (Lesson 26)

What You'll Practice

This lab reinforces these concepts:

| Concept | How You'll Use It |
| --- | --- |
| Test-driven development | Write tests before asking Claude to implement |
| Hooks for automation | Configure a hook to run tests after edits |
| Debugging with Claude | When tests fail, have Claude read the error and fix it |
| Extended thinking | Use it for architectural decisions in your feature |
| Checkpointing | Rewind if you want to try a different approach |
| IDE vs. terminal | Use the terminal for complex work, the IDE for quick reviews |

Setup

Pick a small feature to build. You need something that:

  • Is small enough to complete in ~20 minutes
  • Requires multiple files (to practice multi-file coordination)
  • Has clear test cases
  • Is something you'd actually want in your codebase

Examples:

  • A CSV parsing utility that validates data before returning results
  • A rate limiter for an API (with configurable time windows)
  • A markdown-to-HTML converter with validation
  • A data validation library with custom error messages
  • A simple caching decorator with expiration

Pick something, and let's go.


Step 1: Write Tests First

Start in your terminal:

cd your-project
claude

Say to Claude:

I want to build [your feature]. Here are the requirements:
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]

Can you write comprehensive tests for this feature first? I want tests 
that cover the happy path, edge cases, and error scenarios.

Claude will create a test file. Review it:

ls -la test_*.py  # or test_*.js, etc.
cat test_[feature].py

Ask Claude follow-up questions:

  • "Should we also test for X scenario?"
  • "What about this edge case?"

The goal: You and Claude agree on what "correct" looks like before writing any implementation.
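For the CSV-parsing example from the setup list, a tests-first file might look like the sketch below. Everything here is hypothetical (`parse_csv` and its contract are invented for illustration), and in the lab you'd write only the tests at this stage; a minimal implementation is included just so the sketch runs:

```python
# Tests-first sketch for the hypothetical CSV-parsing feature.
# In the lab, Claude writes the implementation only AFTER you both
# agree these tests define "correct".

def parse_csv(text):
    """Minimal stand-in implementation so this sketch is runnable."""
    rows = [line.split(",") for line in text.strip().splitlines() if line.strip()]
    if not rows:
        raise ValueError("empty input")
    width = len(rows[0])
    for row in rows:
        if len(row) != width:
            raise ValueError("inconsistent column count")
    return rows

# Happy path: well-formed input becomes a list of rows.
def test_happy_path():
    assert parse_csv("a,b\n1,2") == [["a", "b"], ["1", "2"]]

# Edge case: a trailing newline must not create a phantom empty row.
def test_trailing_newline():
    assert parse_csv("a,b\n1,2\n") == [["a", "b"], ["1", "2"]]

# Error scenario: ragged rows are rejected, not silently padded.
def test_ragged_rows_rejected():
    try:
        parse_csv("a,b\n1")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for ragged rows")

if __name__ == "__main__":
    test_happy_path()
    test_trailing_newline()
    test_ragged_rows_rejected()
    print("all tests pass")
```

Note how each test pins down one behavior; that's what gives Claude an unambiguous target in Step 3.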


Step 2: Configure a Hook

In your terminal:

mkdir -p .claude/hooks

Create .claude/hooks/post-edit.sh:

#!/bin/bash
# Run the project's tests after any edit, picking the toolchain that
# matches the project rather than whatever happens to be installed.
if ls test_*.py >/dev/null 2>&1 && command -v pytest >/dev/null 2>&1; then
  pytest test_*.py -v
elif [ -f package.json ] && command -v npm >/dev/null 2>&1; then
  npm test
elif [ -f Cargo.toml ] && command -v cargo >/dev/null 2>&1; then
  cargo test
fi

Make it executable:

chmod +x .claude/hooks/post-edit.sh

Now, every time Claude edits a file, the tests run automatically and you see the results in real time.
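One caveat, depending on your Claude Code version: scripts in `.claude/hooks/` may not be picked up by filename alone, since hooks are registered in `.claude/settings.json`. A PostToolUse registration for the script above looks roughly like this:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/post-edit.sh" }
        ]
      }
    ]
  }
}
```

The `Edit|Write` matcher fires the hook whenever Claude edits or creates a file.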


Step 3: Implement with Claude

In your Claude Code session, say:

Now implement the feature to make these tests pass. 
I've set up a hook that will run the tests after each edit.

Claude will:

  1. Implement the feature
  2. The hook runs tests automatically
  3. Claude sees the results
  4. If tests fail, Claude reads the error and iterates

Let Claude work. Watch the loop:

[Claude edits files]
→ [Hook runs tests]
→ [Tests fail or pass]
→ [Claude sees results and adjusts]
→ [Repeat until all tests pass]

Step 4: When Tests Fail, Debug Together

Let's say a test fails. Instead of Claude blindly trying fixes, do this:

The tests are failing with this error:
[Paste the error output]

Can you read the test file and the implementation to understand 
what's happening? Then propose a fix.

Claude will:

  1. Read the test
  2. Read the implementation
  3. Trace through the logic
  4. Identify the root cause
  5. Propose a fix

This is debugging with an agent that can see the whole system (Lesson 29).
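To make this concrete, here's a hypothetical root cause of the kind the read-trace-fix loop surfaces: a rate limiter (one of the setup examples) whose boundary check was off by one. The class and names are invented for illustration:

```python
# Hypothetical debugging example: a rate limiter that allowed one
# request too many because the boundary check used <= instead of <.

class RateLimiter:
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        # Buggy version was: if self.count <= self.limit:  (off by one)
        if self.count < self.limit:  # fixed: strictly fewer than limit so far
            self.count += 1
            return True
        return False

limiter = RateLimiter(limit=2)
results = [limiter.allow() for _ in range(3)]
# The first two calls are allowed; the third is denied.
assert results == [True, True, False]
```

The buggy `<=` let `limit + 1` requests through; tracing the test's expected counts against the implementation's condition is exactly what exposes it.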


Step 5: Architect with Extended Thinking

If you hit a decision point ("should we handle this case this way or that way?"), bring in extended thinking. Claude Code triggers it from the prompt itself rather than a command-line flag: include a phrase like "think hard" in your request.

So ask:

We need to handle concurrent requests. Think hard about whether we 
should use [Option A] or [Option B], and weigh the tradeoffs.

Claude will spend extra time working through the decision, then come back with a well-reasoned recommendation.


Step 6: Fork to Compare Approaches

If you're not sure which direction is best, fork:

claude --resume --fork-session

In the fork, try the alternative approach. Then:

  • Compare both versions
  • See which approach has cleaner tests
  • See which one's easier to understand
  • Decide which to keep

This is fearless exploration (Lesson 31).


Step 7: Review and Refine

Once all tests pass, do a code review. In your IDE or terminal:

git diff

Look at what Claude wrote. Ask:

  • Is it clean?
  • Are variable names clear?
  • Are there comments where needed?
  • Does it follow your project's style?

If you want refinements, ask Claude:

The implementation works, but can you:
- Improve variable names
- Add docstrings
- Extract a helper function

Claude will iterate until you're happy.
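As a sketch of what that refinement prompt tends to produce (both functions are hypothetical), the logic stays identical while a descriptive name, a docstring, and an extracted helper make the intent explicit:

```python
# Before: terse, unnamed, and hard to review.
def f(rs):
    return [r for r in rs if r and all(c.strip() for c in r)]

# After: same behavior, but the intent is spelled out.
def _is_complete(row):
    """Return True if every cell in the row is non-empty after trimming."""
    return bool(row) and all(cell.strip() for cell in row)

def drop_incomplete_rows(rows):
    """Filter out rows that are empty or contain blank cells."""
    return [row for row in rows if _is_complete(row)]

# Both versions agree, which is what the still-passing tests guarantee.
rows = [["a", "b"], ["", "b"], []]
assert f(rows) == drop_incomplete_rows(rows) == [["a", "b"]]
```

Because the tests from Step 1 still pass, you can accept refinements like this without re-verifying behavior by hand.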


Reflect

Once your feature is complete and tested, answer these questions:

  1. Testing first: How did having tests written first change the implementation process? Did Claude have fewer false starts?

  2. Hooks: Did the automated test hook save you from catching bugs manually? How much did it accelerate the feedback loop?

  3. Debugging: When tests failed, how did Claude's approach (read code, trace logic, propose fix) compare to what you'd do manually?

  4. Extended thinking: Did the extra reasoning on architectural decisions lead to better code? Was the extra time worth it?

  5. Forking: If you explored an alternative approach, did comparing both versions help you make a better decision?

  6. Multi-tool workflow: Did you switch between terminal and IDE? Which context was better for which part of the work?


Bonus Challenge

Once your feature works:

  1. Add error handling. Ask Claude to add comprehensive error handling and edge cases. Watch the tests guide the implementation.

  2. Optimize for performance. Ask Claude to benchmark the feature and suggest optimizations. Use extended thinking for this.

  3. Add documentation. Ask Claude to write a README or usage examples for the feature.

  4. Refactor for clarity. Ask Claude to refactor the code for maximum readability. Make sure tests still pass (they're your safety net).
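For the performance bonus, a minimal timing harness looks like the sketch below; `parse_csv` is a hypothetical stand-in for whatever feature you built:

```python
# Quick benchmark harness for the performance bonus.
import timeit

def parse_csv(text):
    # Placeholder for your real feature.
    return [line.split(",") for line in text.splitlines()]

sample = "\n".join("a,b,c" for _ in range(1_000))

# Time 100 calls per run, repeat 5 runs, and report the best run's
# average per call (the minimum is the least noisy estimate).
runs = timeit.repeat(lambda: parse_csv(sample), number=100, repeat=5)
per_call_ms = min(runs) / 100 * 1000
print(f"best average: {per_call_ms:.3f} ms per call")
```

Capture a number like this before asking Claude for optimizations, so "faster" is something the two of you can measure rather than guess at.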


Summary

You've just experienced the full Claude Code workflow:

  • Tests first for clarity and confidence
  • Hooks for automated validation
  • Debugging with whole-codebase reasoning
  • Extended thinking for hard decisions
  • Forking for fearless exploration
  • Multi-context work (terminal, IDE, conversation)

This is how professionals use Claude Code. Not as a one-shot code generator, but as a pair programmer in a disciplined workflow.

