
Lesson 34: Troubleshoot — MCP and Integration Failures

Day 34 of 50 · ~10 min read · Phase 4: Integrations


You've learned about MCP, IDE integrations, hooks, testing, debugging, extended thinking, and checkpointing. But what happens when things break?

This lesson covers three real scenarios where integrations fail — and how to diagnose and fix them.


Scenario 1: MCP Server Won't Connect

The Problem

You configured the GitHub MCP server and restarted Claude Code. When you try to use it, Claude says:

Error: Unable to connect to GitHub MCP server. 
Connection refused on port 9000.

Claude can't reach the service.

Your Diagnosis

Ask yourself:

  1. Is the server installed? Run:

    which github-mcp
    

    If nothing appears, the server isn't installed.

  2. Is the server running? Some MCP servers run as background processes. Check:

    ps aux | grep github-mcp
    
  3. Is the configuration correct? Check your Claude Code config:

    cat ~/.claude/settings.json
    

    Look for the mcpServers section. Are the credentials valid? Is the command path correct?

  4. Are the credentials fresh? Some services (GitHub, Slack) use API tokens. Did your token expire?
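The four checks above can be bundled into one quick shell sketch. The binary name `github-mcp` and port 9000 are taken from the error message; substitute your own server's name and port:

```shell
# Diagnostic sketch: one function that runs the four MCP checks in order.
mcp_diagnose() {
  local server="$1" port="$2"
  # 1. Installed?
  command -v "$server" >/dev/null 2>&1 && echo "installed: yes" || echo "installed: no"
  # 2. Running as a process?
  pgrep -f "$server" >/dev/null 2>&1 && echo "running: yes" || echo "running: no"
  # 3. Is anything listening on its port? (skip if lsof isn't available)
  if command -v lsof >/dev/null 2>&1; then
    lsof -i :"$port" >/dev/null 2>&1 && echo "port $port: in use" || echo "port $port: free"
  fi
  # 4. Does the config even mention MCP servers?
  grep -q '"mcpServers"' ~/.claude/settings.json 2>/dev/null \
    && echo "config: mcpServers present" || echo "config: mcpServers missing"
}

mcp_diagnose github-mcp 9000
```

Run it after any config change; the output tells you at a glance which layer is broken.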

The Fix

Case 1: Server not installed.

npm install -g @anthropic-ai/github-mcp

Or follow the server's installation instructions.

Case 2: Server needs authentication.

  • Generate a new GitHub API token
  • Update your config:
{
  "mcpServers": {
    "github": {
      "command": "github-mcp",
      "args": ["--token", "ghp_your_new_token_here"]
    }
  }
}
  • Restart Claude Code

Case 3: Port conflict. Some MCP servers run on specific ports. If port 9000 is taken:

lsof -i :9000

Either kill the process using that port or reconfigure the server to use a different port.

Concept Reinforcement

MCP servers are external processes. Claude Code connects to them the way a client connects to a server. If the connection fails, check:

  • Is the server installed?
  • Is it running?
  • Are credentials valid?
  • Is networking correct (port, localhost, firewall)?

Scenario 2: Hook Runs But Produces Wrong Result

The Problem

You configured a post-edit hook to auto-format Python files:

#!/bin/bash
if [[ "$EDITED_FILE" == *.py ]]; then
  black "$EDITED_FILE"
fi

The hook runs (you see no error), but the file isn't formatted. When you check the file manually, it's still in the old format.

Your Diagnosis

Hooks can fail silently. Debug it:

  1. Does the hook run at all? Add logging:

    #!/bin/bash
    echo "Hook triggered at $(date)" >> ~/hook-debug.log
    if [[ "$EDITED_FILE" == *.py ]]; then
      echo "File is Python: $EDITED_FILE" >> ~/hook-debug.log
      black "$EDITED_FILE" >> ~/hook-debug.log 2>&1
    fi
    
  2. Check the log:

    tail ~/hook-debug.log
    

    This tells you if the hook ran and what it did.

  3. Is black installed? Run:

    which black
    
  4. Can black format the file manually? Try:

    black test.py
    

    If this fails, the file probably has syntax errors that black refuses to format; fix those before blaming the hook.

  5. Is the file path correct? In the hook, $EDITED_FILE might not be set. Check:

    #!/bin/bash
    if [ -z "$EDITED_FILE" ]; then
      echo "EDITED_FILE is empty!" >> ~/hook-debug.log
    fi
    

The Fix

Most common issue: the hook's environment isn't set up correctly.

Hooks run in a minimal environment: they don't inherit your interactive shell's full PATH, aliases, or activated virtual environments.

Instead of:

black "$EDITED_FILE"

Use the full path:

/usr/local/bin/black "$EDITED_FILE"

Or activate your environment:

source ~/.venv/bin/activate
black "$EDITED_FILE"

Second issue: The hook variable isn't what you expect.

Check what variables are available in hooks. Different hook types (PreToolUse, PostToolUse, SessionStart) have different variables available.
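A quick way to find out is a throwaway hook body that logs everything it sees. This is a debugging sketch; `HOOK_DEBUG_LOG` is just a convenience variable invented for this example:

```shell
#!/bin/bash
# Debugging sketch: record exactly what environment this hook runs with,
# so you can see which variables your hook type actually provides.
LOG="${HOOK_DEBUG_LOG:-$HOME/hook-debug.log}"
{
  echo "=== hook fired at $(date) ==="
  echo "PATH=$PATH"
  env | sort
} >> "$LOG"
```

Trigger the hook once, then read the log: the variables your script expected either appear in the `env` dump or they don't.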

Concept Reinforcement

Hooks are deterministic scripts, but they run in an isolated environment. The lesson: test your hooks manually first.

#!/bin/bash
# Test hook by running it directly
bash ~/.claude/hooks/post-edit.sh

If it works manually but not in Claude Code, the issue is the environment or hook configuration, not the script itself.
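Putting this scenario's fixes together, a hardened version of the formatting hook might look like the sketch below. The `/usr/local/bin/black` path is an assumption; find yours with `which black` in your own shell:

```shell
#!/bin/bash
# Hardened rewrite of the formatting hook: full path to black (hooks get a
# minimal PATH), a guard for the missing variable, and a log line for every
# outcome so nothing fails silently.
LOG="$HOME/hook-debug.log"
BLACK=/usr/local/bin/black   # assumed location; check with `which black`

if [ -z "$EDITED_FILE" ]; then
  echo "EDITED_FILE is empty" >> "$LOG"
elif [[ "$EDITED_FILE" == *.py ]]; then
  if [ -x "$BLACK" ]; then
    "$BLACK" "$EDITED_FILE" >> "$LOG" 2>&1 || echo "black failed on $EDITED_FILE" >> "$LOG"
  else
    echo "black not found at $BLACK" >> "$LOG"
  fi
else
  echo "skipping non-Python file: $EDITED_FILE" >> "$LOG"
fi
```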


Scenario 3: Extended Thinking Gives Different Answer Than Normal Mode

The Problem

You ask Claude a complex architectural question in extended thinking mode:

Enable extended thinking. Should we use [Option A] or [Option B]?

Claude spends 30 seconds thinking, then says: "Recommendation: Use Option A for these reasons..."

But when you ask the same question without extended thinking:

Should we use [Option A] or [Option B]?

Claude says: "I'd recommend Option B because..."

Different answers. Which one is right?

Your Diagnosis

This is actually not a bug — it's expected behavior. Extended thinking and normal reasoning can reach different conclusions because:

  1. Extended thinking has more reasoning steps. It explores more options and considers more angles before deciding.

  2. Normal reasoning is faster but shallower. It's more intuitive but less thorough.

  3. Both are Claude, but with different reasoning budgets. More thinking isn't always better, but for complex decisions it usually helps.

Which answer should you trust?

  • Trust extended thinking for: Complex architectural decisions, subtle bugs, multi-system problems, high-stakes choices
  • Trust normal thinking for: Quick feedback, well-understood problems, when you need fast iteration

The Real Issue: Reproducibility

The real problem is when extended thinking gives you inconsistent results: different answers each time you ask the same question.

Debug this:

  1. Ask the same question 3 times with extended thinking. Do you get the same answer?

    • If yes: Extended thinking is consistent. Normal mode just takes a shortcut.
    • If no: Something's wrong. The question is probably ambiguous or missing context Claude needs.
  2. Provide more context. Extended thinking works better with clear constraints:

    We have these constraints:
    - We can't introduce new dependencies
    - We need it to be testable
    - Performance is critical
    
    Should we use Option A or Option B?
    
  3. Ask Claude to explain its reasoning. Have Claude show you the thinking process:

    Use extended thinking to decide, then explain your reasoning step-by-step.
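
Step 1 is easy to script: save each run's answer to a file (for example with the CLI's print mode, `claude -p "..." > answer-1.txt`) and compare the files. A small helper sketch, assuming one answer file per run:

```shell
# compare_answers: print "consistent" if every file matches the first one,
# "inconsistent" otherwise. Answer files are whatever you captured per run.
compare_answers() {
  local first="$1" f verdict="consistent"
  shift
  for f in "$@"; do
    cmp -s "$first" "$f" || verdict="inconsistent"
  done
  echo "$verdict"
}

# Usage: compare_answers answer-1.txt answer-2.txt answer-3.txt
```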
    

The Fix

If extended thinking is inconsistent:

  1. Provide more context. Unclear questions get variable answers.

  2. Use checkpointing. If you're uncertain, fork and explore both approaches:

    claude --fork-session
    

    Compare both implementations. One is usually clearly better.

  3. Trust testing. The best way to resolve "which approach is better" is to build both and run tests. The code doesn't lie.

Concept Reinforcement

Extended thinking isn't an oracle. It's a reasoning mode with a larger thinking budget: it explores more carefully but isn't guaranteed to be right.

The lesson: Use extended thinking for reasoning, but verify decisions with code and tests.

Build both approaches, run tests, let the code tell you which one works better.


Bonus: Debugging Integration Issues in General

When an integration fails, ask yourself:

  1. Is the external service working? (GitHub API, Slack workspace, database)
  2. Are credentials valid? (tokens, API keys, passwords)
  3. Is networking correct? (ports, firewalls, localhost)
  4. Is the configuration right? (paths, settings, environment variables)
  5. Does it work outside Claude Code? (test the service directly)
  6. Are there logs? (most services log failures; check them)

Nine times out of ten, integration issues come down to one of those six things.
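Check 4 is simple to automate. A sketch, assuming the settings path used earlier in this lesson and using python3 only as a JSON validator:

```shell
# check_config: is the config parseable JSON, and does it define mcpServers?
check_config() {
  local cfg="$1"
  if ! python3 -m json.tool "$cfg" >/dev/null 2>&1; then
    echo "invalid JSON"          # missing file or malformed JSON
  elif grep -q '"mcpServers"' "$cfg"; then
    echo "mcpServers found"
  else
    echo "mcpServers missing"
  fi
}

# Usage: check_config ~/.claude/settings.json
```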


Your Turn

You've now learned about three classes of failures:

  • MCP server connection issues
  • Hook environment and execution issues
  • Extended thinking giving different results

These represent the main failure modes for Phase 4. When something breaks:

  1. Diagnose calmly
  2. Check the logs
  3. Test the component in isolation
  4. Fix the root cause, not the symptom

You're becoming an expert at using Claude Code. Debugging failures is part of that expertise.


← Back to Curriculum · Phase 5 →